TY - JOUR
T1 - hardRain: An R package for quick, automated rainfall detection in ecoacoustic datasets using a threshold-based approach
Y1 - 2020
A1 - Metcalf, Oliver C.
A1 - Lees, Alexander C.
A1 - Barlow, Jos
A1 - Marsden, Stuart J.
A1 - Devenish, Christian
KW - Acoustic pre-processing
KW - bioacoustics
KW - ecoacoustics
KW - Environmental monitoring
KW - Rain detection
KW - Soundscape ecology
AB -

The increasing demand for cost-efficient biodiversity data at large spatiotemporal scales has led to an increase in the collection of large ecoacoustic datasets. Whilst the ease of collecting and storing audio data has rapidly increased and costs have fallen, methods for robust analysis of the data have not developed as quickly. Identification and classification of audio signals to species level is extremely desirable, but reliability can be strongly affected by non-target noise, especially rainfall. Despite this demand, few easily applicable pre-processing methods for rainfall detection are available to conservation practitioners and ecologists. Here, we use threshold values of two simple measures, Power Spectrum Density (amplitude) and Signal-to-Noise Ratio, at two frequency bands to differentiate between the presence and absence of heavy rainfall. We assess the effect of using different threshold values on Accuracy and Specificity. We apply the method to four datasets from both tropical and temperate regions, and find that it has up to 99% accuracy on tropical datasets (e.g. from the Brazilian Amazon) but performs less well in temperate environments. This is likely because rainfall is more intense in tropical forests and its sound is amplified when falling on dense, broadleaf vegetation. We show that by choosing between different threshold values, informed trade-offs can be made between Accuracy and Specificity, allowing the exclusion of large amounts of audio data containing rainfall in all locations without the loss of data not containing rain. We assess the impact of using different sample sizes of audio data to set threshold values, and find that 200 15-s audio files represent an optimal trade-off between effort, accuracy and specificity in most scenarios. This methodology and the accompanying R package ‘hardRain’ provide the first automated rainfall detection tool for pre-processing large acoustic datasets without the need for any additional rain gauge data.
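The thresholding idea in this abstract can be sketched as follows. This is an illustrative sketch only, not the hardRain implementation: the band edges, threshold values, and the direction of the SNR test (heavy rain is loud but spectrally flat, so in-band peak-to-mean ratio is low) are all assumptions made for the example.

```python
# Illustrative sketch (NOT the hardRain implementation) of threshold-based
# rain detection: a clip is flagged as rain when, in each candidate band,
# the mean Power Spectrum Density is high (loud broadband energy) and the
# in-band peak-to-mean ratio is low (flat, noise-like spectrum).
# Band edges and threshold values below are invented for the example.
import numpy as np
from scipy.signal import welch

def band_metrics(audio, sr, lo, hi):
    """Mean PSD and a simple peak-to-mean 'SNR' proxy within [lo, hi] Hz."""
    freqs, psd = welch(audio, fs=sr, nperseg=2048)
    band = psd[(freqs >= lo) & (freqs <= hi)]
    return band.mean(), band.max() / band.mean()

def is_rain(audio, sr, bands=((600, 1200), (4400, 5600)),
            psd_thresh=5e-5, snr_thresh=2.0):
    """Flag heavy rain when every band is both loud and spectrally flat."""
    return all(mean_psd >= psd_thresh and snr <= snr_thresh
               for mean_psd, snr in (band_metrics(audio, sr, lo, hi)
                                     for lo, hi in bands))
```

Noise-like clips (heavy rain) pass both tests; tonal clips (birdsong, insects) fail the flatness test. In practice the thresholds would be calibrated on a labelled subset of recordings, as the abstract describes.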

UR - https://linkinghub.elsevier.com/retrieve/pii/S1470160X19307873
ER -

TY - THES
T1 - Acoustic classification of Australian frogs for ecosystem surveys
T2 - School of Electrical Engineering and Computer Science
Y1 - 2017
A1 - Jie Xie
KW - Acoustic event detection
KW - Acoustic feature
KW - bioacoustics
KW - Frog call classification
KW - Multiple-instance multiple-label learning (MIML)
KW - Multiple-label learning (ML)
KW - Soundscape ecology
KW - Syllable segmentation
KW - Wavelet packet decomposition (WPD)
AB -

Frogs play an important role in Earth’s ecosystems, but population declines have been observed at many locations around the world. Monitoring frog activity can assist conservation efforts and improve our understanding of their interactions with the environment and other organisms. Traditional observation methods require ecologists and volunteers to visit the field, which greatly limits the scale of acoustic data collection. Recent advances in acoustic sensors provide a novel way to survey vocalising animals such as frogs. Once sensors are installed in the field, acoustic data can be collected automatically at large spatial and temporal scales. Each acoustic sensor can generate several gigabytes of compressed audio data per day, so large volumes of raw acoustic data accumulate. To gain insights about frogs and their environment, classifying frog species in acoustic data is necessary. However, manual species identification is infeasible given the volume of collected data, making automated species classification essential. Previous studies on signal processing and machine learning for frog call classification often have two limitations: (1) the recordings used to train and test classifiers are trophy recordings with a high signal-to-noise ratio (SNR ≥ 15 dB); (2) each individual recording is assumed to contain only one frog species. However, field recordings typically have a low SNR (< 15 dB) and contain multiple simultaneously vocalising frog species. This thesis addresses these two limitations and makes the following contributions.
(1) Develop a combined feature set from temporal, perceptual, and cepstral domains for improving the state-of-the-art performance of frog call classification using trophy recordings (Chapter 3).
(2) Propose a novel cepstral feature via adaptive frequency scaled wavelet packet decomposition (WPD) to improve the cepstral feature’s anti-noise ability for frog call classification using both trophy and field recordings (Chapter 4).
(3) Design a novel multiple-instance multiple-label (MIML) framework to classify multiple simultaneously vocalising frog species in field recordings (Chapter 5).
(4) Design a novel multiple-label (ML) framework to increase the robustness of classification results when classifying multiple simultaneously vocalising frog species in field recordings (Chapter 6).

Our proposed approaches achieve promising classification results compared with previous studies. With the classification techniques developed here, ecosystems can be surveyed at large spatial and temporal scales, helping ecologists better understand them.
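Contribution (2) above centres on wavelet packet decomposition features. A minimal sketch of a WPD energy feature vector follows, assuming the PyWavelets library; the wavelet ('db4') and decomposition depth are illustrative choices, not the thesis's adaptive frequency scaling.

```python
# Hypothetical sketch of a WPD-based feature: decompose a frog-call
# syllable into 2**level frequency subbands and use the normalised
# energy per subband as a feature vector. Not the thesis code; the
# wavelet and depth are illustrative.
import numpy as np
import pywt

def wpd_energy_features(syllable, wavelet="db4", level=4):
    """Normalised energy in each level-`level` WPD subband (low to high)."""
    wp = pywt.WaveletPacket(data=syllable, wavelet=wavelet, maxlevel=level)
    leaves = wp.get_level(level, order="freq")   # subbands in frequency order
    energies = np.array([np.sum(node.data ** 2) for node in leaves])
    return energies / energies.sum()             # 2**level features, sum to 1
```

A classifier would then be trained on these fixed-length vectors; subband energies tend to degrade more gracefully under broadband noise than raw spectra, which motivates their use on low-SNR field recordings.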

JF - School of Electrical Engineering and Computer Science
PB - Queensland University of Technology
ER -

TY - THES
T1 - Analysis and visualisation of very-long-duration acoustic recordings of the natural environment
T2 - School of Electrical Engineering and Computer Science
Y1 - 2018
A1 - Yvonne Phillips
KW - acoustic indices
KW - Anthropophony
KW - bioacoustics
KW - biophony
KW - Cicadas
KW - Clustering
KW - Data reduction
KW - Diel plots
KW - Dot-matrix plots
KW - ecoacoustics
KW - Ecological Monitoring
KW - Geophony
KW - Long-duration false-colour spectrograms
KW - Microphone malfunction
KW - Principal Components Analysis
KW - Soundscape ecology
KW - Very-long-duration audio recording
KW - Visualisation
AB -

Advances in technology and reduction in data storage costs enable the autonomous collection of large quantities of continuous audio recordings. While the collection of very long environmental recordings has become easier, the analysis of these recordings remains challenging. A very-long-duration audio recording is defined as one with a minimum length of one day, but may have durations of weeks, months, or years. This thesis provides methods for data reduction and visualisation that enable the ecological interpretation and navigation of very-long-duration audio recordings.
The major theme of data reduction commenced after the establishment of protocols and the collection of two thirteen-month continuous audio recordings from two separate Southeast Queensland forest ecosystems. The acoustic indices calculated on one-minute audio segments were used to develop two new techniques to visualise the contents of very-long-duration recordings. An acoustic index is a mathematical expression used to measure a particular aspect of the energy distribution in audio recordings. Microphone failure in one channel was noticed shortly after the recording commenced. A method was established to detect microphone problems in long recordings.
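The per-minute acoustic-index computation described above can be sketched as follows, using spectral entropy as a stand-in for the indices actually used in the thesis (which are not specified here); the segment length follows the abstract, but the index choice and parameters are assumptions for the example.

```python
# Illustrative sketch (not the thesis pipeline): compute one simple
# acoustic index, normalised spectral entropy, for each full one-minute
# segment of a recording. Low values indicate tonal sound, values near
# 1 indicate flat, noise-like sound.
import numpy as np

def spectral_entropy(segment):
    """Entropy of the power spectrum, normalised to [0, 1]."""
    power = np.abs(np.fft.rfft(segment)) ** 2
    p = power / power.sum()
    p = p[p > 0]                          # avoid log(0)
    return float(-(p * np.log2(p)).sum() / np.log2(len(power)))

def per_minute_indices(audio, sr):
    """One index value per complete one-minute segment."""
    seg_len = 60 * sr
    n = len(audio) // seg_len
    return [spectral_entropy(audio[i*seg_len:(i+1)*seg_len]) for i in range(n)]
```

A thirteen-month recording reduces this way to roughly 570,000 values per index, a sequence small enough to cluster and to render as the false-colour spectrograms and diel plots the thesis develops.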
A novel error measure was developed to detect seasonal and site differences in the data and to optimise the clustering accordingly. Cluster interpretation on very-long-duration audio recordings is problematic because listening to large amounts of audio is time-consuming and therefore impractical. To overcome this, five methods were developed to build on the interpretations made through listening. These methods enabled the allocation of an acoustic label to each cluster, resulting in a labelled acoustic sequence. This acoustic sequence was used to develop three additional visualisation techniques.
The methods developed in this thesis culminated in six case studies. These extended the ecological interpretation of the acoustic sequence beyond what was possible through the visualisations alone. The case studies demonstrated that clustering can facilitate ecological interpretation of very-long-duration audio recordings.

JF - School of Electrical Engineering and Computer Science
PB - Queensland University of Technology
UR - https://eprints.qut.edu.au/123020/1/Yvonne_Phillips_Thesis.pdf
ER -

TY - JOUR
T1 - Automatic identification of rainfall in acoustic recordings
JF - Ecological Indicators
Y1 - 2017
A1 - Bedoya, Carol
A1 - Isaza, Claudia
A1 - Daza, Juan M.
A1 - López, José D.
KW - bioacoustics
KW - Environmental monitoring
KW - Precipitation measurement
KW - Rain detection
KW - Soundscape ecology
AB -

The rainfall regime is one of the main abiotic components that can modify the breeding activity of animal species. It has a direct effect on environmental conditions, and acts as a modifier of the landscape and soundscape. Variations in water quality and acidity, flooding, erosion, and sound distortion are usually associated with the presence of rain. Ecological studies of populations and communities would therefore benefit from improvements in the estimation of rainfall patterns across space and time.

In this paper, a method for automatic detection of rainfall in forests using acoustic recordings is proposed. This approach is based on the estimation of the mean value and signal-to-noise ratio of the power spectral density in the frequency band in which the sound of raindrops falling on the vegetation layers of the forest is most prominent (i.e. 600–1200 Hz). The results of this method were compared with human auditory identification and data provided by a pluviometer. We achieved a correlation of 95.23% between the data provided by the pluviometer and the predictions of a regression model. Furthermore, we attained a general accuracy between 92.90% and 99.98% when identifying different intensity levels of rainfall in recordings.
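The pluviometer calibration described in this abstract can be sketched as a simple regression; the linear model form, the dB scaling of the band PSD, and the placeholder data below are assumptions for illustration, not the paper's actual model.

```python
# Hypothetical sketch of the calibration step: fit a regression mapping
# the mean 600-1200 Hz band PSD of each recording to rainfall measured
# by a co-located pluviometer, then predict rainfall for new recordings.
# All numbers here are synthetic placeholders.
import numpy as np

def fit_rain_model(band_psd_db, rain_mm):
    """Least-squares line mapping band PSD (dB) to rainfall (mm)."""
    slope, intercept = np.polyfit(band_psd_db, rain_mm, deg=1)
    return slope, intercept

def predict_rain(model, band_psd_db):
    """Predicted rainfall, clipped at zero (negative rain is meaningless)."""
    slope, intercept = model
    return np.maximum(0.0, slope * np.asarray(band_psd_db) + intercept)
```

The paper's reported 95.23% correlation with the pluviometer refers to a model fitted on real paired acoustic and rain-gauge data, not to this toy example.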

Nowadays, passive monitoring recorders are extensively used to study the acoustic breeding processes of many animal species. Our method uses the signals acquired by these recorders to identify and quantify rainfall events over short and long time spans. The proposed approach will automatically provide information about the rainfall patterns experienced by target species based on audio recordings.

VL - 75
UR - http://linkinghub.elsevier.com/retrieve/pii/S1470160X16307117
JO - Ecological Indicators
ER -