Automatic acoustic identification of individual animals: Improving generalisation across species and recording conditions (1810.09273v1)

Published 22 Oct 2018 in cs.SD and eess.AS

Abstract: Many animals emit vocal sounds which, independently from the sounds' function, embed some individually-distinctive signature. Thus the automatic recognition of individuals by sound is a potentially powerful tool for zoology and ecology research and practical monitoring. Here we present a general automatic identification method, that can work across multiple animal species with various levels of complexity in their communication systems. We further introduce new analysis techniques based on dataset manipulations that can evaluate the robustness and generality of a classifier. By using these techniques we confirmed the presence of experimental confounds in situations resembling those from past studies. We introduce data manipulations that can reduce the impact of these confounds, compatible with any classifier. We suggest that assessment of confounds should become a standard part of future studies to ensure they do not report over-optimistic results. We provide annotated recordings used for analyses along with this study and we call for dataset sharing to be a common practice to enhance development of methods and comparisons of results.

Citations (7)

Summary

  • The paper's main contribution is a novel automatic acoustic identification of individuals (AAII) method that uses structured data augmentation and feature learning to enhance recognition performance.
  • It demonstrates improved classifier robustness by mitigating background confounds through adversarial and stratified augmentation techniques.
  • The study emphasizes open data sharing and rigorous evaluation across varied vocal complexities and recording conditions.

Automatic Acoustic Identification of Individual Animals

This paper introduces a general method for automatic acoustic identification of individuals (AAII) across multiple animal species, addressing the challenge of generalizing across varying recording conditions and species vocal complexity. The paper emphasizes the importance of assessing and mitigating experimental confounds to avoid over-optimistic results. It also advocates for data sharing to enhance method development and result comparability.

Methods and Data Collection

The researchers used data from three bird species with different vocal complexities: little owl (Athene noctua), chiffchaff (Phylloscopus collybita), and tree pipit (Anthus trivialis). These species were chosen to assess the method's applicability across varying levels of vocal complexity and ecological niches. The recordings were conducted in Central European farmland and military training areas, ensuring favorable weather conditions and minimizing disturbance to the animals. Audio files were segmented into "foreground" (vocal activity present) and "background" (vocal activity absent) regions.

Structured Data Augmentation Techniques

The paper introduces two structured data augmentation methods to evaluate and reduce the confounding effect of background sound:

  • Adversarial Data Augmentation: Foreground recordings are mixed with background recordings from different individuals to assess the classifier's vulnerability to background noise confounds.
  • Stratified Data Augmentation: Training datasets are created by mixing each training item with background sound examples from every other individual, aiming to reduce the correlation between foreground identity and background (both mixing schemes are sketched after this list).
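
The mixing operation at the core of both schemes can be illustrated with a short sketch, assuming clips are already loaded as 1-D NumPy arrays at a common sample rate and that background examples are grouped per individual in a dictionary. The function names, the fixed mixing level, and the data layout are illustrative assumptions, not the paper's exact implementation.

```python
import random

import numpy as np

def mix_clips(foreground, background, snr_db=10.0):
    """Mix a foreground clip with a background clip at a nominal SNR.
    Both are 1-D NumPy arrays at the same sample rate; the background is
    tiled or truncated to the foreground's length. The SNR is illustrative."""
    bg = np.resize(background, foreground.shape)
    fg_rms = np.sqrt(np.mean(foreground ** 2)) + 1e-12
    bg_rms = np.sqrt(np.mean(bg ** 2)) + 1e-12
    gain = (fg_rms / bg_rms) * 10.0 ** (-snr_db / 20.0)
    return foreground + gain * bg

def adversarial_augment(eval_items, backgrounds_by_individual, seed=0):
    """Adversarial scheme: mix each evaluation clip with background recorded
    from a *different* individual, to test whether the classifier leans on
    background cues rather than the vocalisation itself."""
    rng = random.Random(seed)
    out = []
    for clip, individual in eval_items:
        others = [i for i in backgrounds_by_individual if i != individual]
        distractor = rng.choice(backgrounds_by_individual[rng.choice(others)])
        out.append((mix_clips(clip, distractor), individual))
    return out

def stratified_augment(train_items, backgrounds_by_individual):
    """Stratified scheme: mix every training clip with a background example
    from every *other* individual, so identity is no longer correlated with
    any particular background."""
    out = []
    for clip, individual in train_items:
        for other, bgs in backgrounds_by_individual.items():
            if other != individual:
                out.extend((mix_clips(clip, bg), individual) for bg in bgs)
    return out
```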

The paper also explores the use of background-only sound recordings to diagnose confounding-factor issues and to create an explicit "wastebasket" class during training, analogous to the universal background model (UBM) used in open-set recognition methods.
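
As a simple illustration of the wastebasket idea, background-only clips can be appended to the training set under one shared extra label, giving the trained model an explicit class for "none of the known individuals". The label string and the (clip, label) layout below are illustrative assumptions, not the paper's code.

```python
def add_wastebasket_class(train_items, background_clips, label="background"):
    """Append background-only clips to a list of (clip, individual_label)
    pairs under a shared extra label, giving the classifier an explicit
    class to absorb sounds that match no known individual."""
    return list(train_items) + [(clip, label) for clip in background_clips]
```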

Automatic Classification and Evaluation

The automatic classification workflow converts audio files into mel spectrogram representations, applies unsupervised feature learning, and classifies with a random forest. The datasets were divided into training and evaluation sets, with evaluation data drawn from different days or years than the training data. Performance was quantified using Receiver Operating Characteristic (ROC) analysis and the Area Under the Curve (AUC). The effect of adversarial data augmentation was probed in detail by measuring the root-mean-square (RMS) error between the probability outputs of the classifier with and without augmentation.
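
A minimal sketch of this workflow, using librosa and scikit-learn, is shown below. A MiniBatchKMeans codebook with histogram pooling stands in for the unsupervised feature-learning step, and the file names, labels, and parameter values are placeholders; treat it as one plausible instantiation under those assumptions rather than the paper's exact pipeline.

```python
import numpy as np
import librosa
from sklearn.cluster import MiniBatchKMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def mel_frames(path, sr=22050, n_mels=40):
    """Load a clip and return its log-mel spectrogram frames (time x mel bins)."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel).T

# Placeholder file lists and labels; substitute real annotated recordings,
# with evaluation clips taken from different days or years than training.
train_paths = ["indivA_2016.wav", "indivB_2016.wav", "indivC_2016.wav"]
train_labels = ["A", "B", "C"]
eval_paths = ["indivA_2017.wav", "indivB_2017.wav", "indivC_2017.wav"]
eval_labels = ["A", "B", "C"]

# Unsupervised feature learning: a k-means codebook over spectrogram frames.
codebook = MiniBatchKMeans(n_clusters=64, random_state=0)
codebook.fit(np.vstack([mel_frames(p) for p in train_paths]))

def clip_features(path):
    """Assign each frame to its nearest codebook entry and pool over time,
    yielding one fixed-length feature vector per clip."""
    assignments = codebook.predict(mel_frames(path))
    return np.bincount(assignments, minlength=codebook.n_clusters).astype(float)

X_train = np.array([clip_features(p) for p in train_paths])
X_eval = np.array([clip_features(p) for p in eval_paths])

# Random forest classification and ROC/AUC evaluation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, train_labels)
probs = clf.predict_proba(X_eval)
auc = roc_auc_score(eval_labels, probs, multi_class="ovr",
                    average="macro", labels=clf.classes_)
print(f"Macro-averaged one-vs-rest AUC: {auc:.3f}")
```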

Experimental Results

The study was conducted in two phases. The first phase compared different interventions on the chiffchaff datasets, evaluating performance both within-year and across-year. The second phase evaluated the selected approach across all three species, comparing a basic classifier version against an improved version.

The results indicated that feature-learning and structured data augmentation significantly improved classifier performance and robustness to adversarial data augmentation. The improved classifier dramatically increased recognition performance of foreground recordings while maintaining stable recognition of background recordings, suggesting that the improvement was based on signal characteristics rather than confounding factors.

Discussion and Recommendations

The paper highlights that a single AAII approach can be applied successfully across species with differing vocalization complexity, and it emphasizes the importance of assessing identification performance rigorously, especially in species with complex and variable songs such as the chiffchaff.

Key recommendations for users of automatic classifiers, particularly for acoustic recognition of individuals, include:

  • Record and publish background segments for each individual.
  • Improve robustness through suitable input features and structured data augmentation.
  • Probe classifier robustness using background-only recognition, adversarial distraction with background, and across-year testing (a minimal probe is sketched after this list).
  • Consider species-specific vocalization characteristics.
  • Test both manual and learned features, recognizing their different generalization and performance characteristics.
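
As one concrete way to run the adversarial-distraction probe, the classifier's class probabilities on clean foreground features can be compared with its probabilities for the same clips after background mixing (as in the augmentation sketch earlier), using the RMS difference as the robustness score. The function and argument names are illustrative.

```python
import numpy as np

def probability_rms_error(clf, X_clean, X_distracted):
    """RMS difference between class probabilities for clean foreground
    features and features of the same clips mixed with background from
    other individuals. A robust classifier should show a small value."""
    p_clean = clf.predict_proba(X_clean)
    p_distracted = clf.predict_proba(X_distracted)
    return np.sqrt(np.mean((p_clean - p_distracted) ** 2))
```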

Conclusion

The paper demonstrates that automatic acoustic identification of individual animals can be improved through structured data augmentation and feature learning, while also emphasizing the need to address potential confounding factors. The findings and recommendations provide valuable guidance for researchers and practitioners in the field of bioacoustics and wildlife monitoring. The call for open data sharing promotes collaborative progress in developing robust and generalizable AAII methods.