Objective
Several federally protected bat species occur on numerous Department of Defense (DoD) properties, and their presence presents a challenge for military operations. Many DoD land managers use automated passive acoustic monitoring to survey for bats, which produces copious data that pose significant data management and analysis challenges. Managers therefore rely on specialized automated software to analyze these data and to detect and identify species based on their echolocation calls. Recent advances in artificial intelligence, specifically deep learning, have produced new models for bat call classification that outperform the conventional software currently in use. The objective of this project is to demonstrate deep learning models for bat echolocation call detection and classification that improve the accuracy and efficiency of automated acoustic analyses. Specifically, the project team will 1) train and validate two models to classify echolocation calls of North American bat species, 2) evaluate model performance against software currently used by practitioners, 3) assess the time and cost savings of the models relative to current methods, and 4) transition this technology to DoD natural resource managers.
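For illustration, the evaluation in objective 2 amounts to scoring each classifier's output against expert-vetted species labels. The Python sketch below (assuming the scikit-learn library; the species codes and labels are hypothetical, not project data) shows one way such a side-by-side comparison could be computed:

from sklearn.metrics import classification_report

# Hypothetical expert-vetted labels and classifier outputs for five recordings,
# using standard four-letter species codes (e.g., PESU = tricolored bat).
truth = ["MYLU", "PESU", "MYLU", "EPFU", "PESU"]
cnn_pred = ["MYLU", "PESU", "MYLU", "EPFU", "MYLU"]           # deep learning model
conventional_pred = ["MYLU", "EPFU", "MYLU", "EPFU", "MYLU"]  # current software

# Per-species precision, recall, and F1 let practitioners compare classifiers
# on the species that matter most (e.g., ESA-listed bats).
print("CNN model:\n", classification_report(truth, cnn_pred, zero_division=0))
print("Conventional software:\n", classification_report(truth, conventional_pred, zero_division=0))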
Technology Description
Convolutional neural networks (CNNs) are a class of deep neural networks that automatically learn, from training data, the features that are optimal for solving a given classification problem (e.g., identifying animal vocalizations). Conventional machine learning classifiers, by contrast, require substantial human input to pre-process data and to extract and select features. The automated software programs currently used to classify bat calls rely primarily on conventional machine learning classifiers that extract and measure call parameters (e.g., minimum frequency, maximum frequency, duration). Here, the project team will use a CNN architecture called BirdNET, previously developed by project team members for the detection and identification of avian species. The CNN applies image recognition techniques to classify audio files rendered as spectrograms (plots of frequency over time). During training, the CNN is fed batches of spectrograms and learns to extract the patterns that inform its classification decisions. Although initially developed to identify bird songs, BirdNET can be trained to identify any species or sound of interest, in this case, bat calls.
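As an illustration of the spectrogram-plus-CNN approach described above, the following Python sketch (assuming the librosa and PyTorch libraries) converts a recording to a mel spectrogram and passes it through a toy convolutional classifier. It is a minimal sketch only; BirdNET itself is a much larger model, and the sample rate, layer sizes, and file name here are illustrative assumptions:

import librosa
import numpy as np
import torch
import torch.nn as nn

def audio_to_spectrogram(path, sr=256_000, n_mels=128):
    """Load a high-sample-rate bat recording and return a dB-scaled
    mel spectrogram as a 1 x n_mels x frames tensor."""
    y, sr = librosa.load(path, sr=sr)
    spec = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    spec_db = librosa.power_to_db(spec, ref=np.max)
    return torch.tensor(spec_db).unsqueeze(0)  # add channel dimension

class BatCallCNN(nn.Module):
    """Toy CNN: stacked convolutions learn spectro-temporal features
    from the spectrogram image; a linear layer maps them to species scores."""
    def __init__(self, n_species):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling handles variable clip lengths
        )
        self.classifier = nn.Linear(32, n_species)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)  # raw logits; softmax applied at inference

# Usage (hypothetical file and species count):
# spec = audio_to_spectrogram("nightly_recording.wav")
# logits = BatCallCNN(n_species=20)(spec.unsqueeze(0))  # add batch dimension

The global average pooling layer is one common design choice for audio classifiers, since it lets the same network score clips of varying duration.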
Benefits
This demonstration will reduce costs for military natural resource managers by cutting the time and labor needed to analyze the copious data produced by acoustic surveys for bats. These savings will be achieved through increased technical efficacy (i.e., improved accuracy) of automated species identifications, thereby minimizing the number of files that trained or contracted personnel must manually vet to validate acoustic datasets. As more bat species receive federal protection under the Endangered Species Act (ESA), the DoD must account for these species on its properties. For example, the tricolored bat was proposed for listing as endangered in September 2022, and the little brown bat is under review for listing. More than 300 installations could be affected by at least one bat species listing under the ESA in the next five years. Because passive acoustic monitoring is the primary survey method used to detect bats, improving the efficiency of acoustic data analyses will increase the reliability, objectivity, repeatability, and speed with which species can be identified on DoD lands. The demonstration will also enable the DoD to take advantage of cutting-edge artificial intelligence technology.
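To illustrate how improved classifier accuracy translates into reduced vetting workload, the short Python sketch below (with hypothetical file names, species codes, confidence scores, and threshold) triages classifier output so that only low-confidence detections are routed to manual review:

def triage(detections, review_threshold=0.9):
    """Split detections into auto-accepted and manual-review queues."""
    auto, manual = [], []
    for fname, species, confidence in detections:
        (auto if confidence >= review_threshold else manual).append((fname, species))
    return auto, manual

detections = [
    ("rec_001.wav", "PESU", 0.97),  # high confidence: accept automatically
    ("rec_002.wav", "MYLU", 0.62),  # low confidence: flag for expert review
    ("rec_003.wav", "EPFU", 0.95),
]
auto, manual = triage(detections)
print(f"{len(manual)} of {len(detections)} files require manual vetting")

The more accurate and better calibrated the classifier, the larger the share of files that clear the review threshold, which is the mechanism behind the labor savings described above.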