<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Giorli, Giacomo</style></author><author><style face="normal" font="default" size="100%">Goetz, Kimberly T.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Acoustically estimated size distribution of sperm whales (&lt;i&gt;Physeter macrocephalus&lt;/i&gt;) off the east coast of New Zealand</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">cepstral analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">cepstrum</style></keyword><keyword><style  face="normal" font="default" size="100%">echolocation</style></keyword><keyword><style  face="normal" font="default" size="100%">length-frequency distribution</style></keyword><keyword><style  face="normal" font="default" size="100%">marine mammal</style></keyword><keyword><style  face="normal" font="default" size="100%">odontocete</style></keyword><keyword><style  face="normal" font="default" size="100%">PAM</style></keyword><keyword><style  face="normal" font="default" size="100%">passive acoustics</style></keyword><keyword><style  face="normal" font="default" size="100%">sperm whale</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.tandfonline.com/doi/full/10.1080/00288330.2019.1679843</style></url><url><style face="normal" font="default" size="100%">https://www.tandfonline.com/doi/pdf/10.1080/00288330.2019.1679843</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;The length-frequency distribution of sperm whales (Physeter macrocephalus) was studied on the east coast of NZ using passive acoustic recorders moored offshore of Kaikoura, Cape Palliser and Castlepoint. Sperm whales&amp;rsquo; echolocation signals are unique among odontocete species. Their clicks are composed of multiple pulses resulting from sound transmission within the whale&amp;rsquo;s head. The total length of the whales can be estimated by measuring the time delay between these pulses. A total of 997 length measurements were obtained from click trains using cepstral analysis (mean&amp;thinsp;=&amp;thinsp;14.6 m; min&amp;thinsp;=&amp;thinsp;9.6 m; max&amp;thinsp;=&amp;thinsp;18.3 m; std&amp;thinsp;=&amp;thinsp;1 m). The size-frequency distributions at all three locations were similar, although animals smaller than 12 m were not present offshore of Kaikoura. Animals of various sizes appeared to be present all year round, with no apparent seasonality in the occurrence of any size class.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">López-Baucells, Adrià</style></author><author><style face="normal" font="default" size="100%">Torrent, Laura</style></author><author><style face="normal" font="default" size="100%">Rocha, Ricardo</style></author><author><style face="normal" font="default" size="100%">Bobrowiec, Paulo E.D.</style></author><author><style face="normal" font="default" size="100%">Palmeirim, Jorge M.</style></author><author><style face="normal" font="default" size="100%">Meyer, Christoph F.J.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Stronger together: Combining automated classifiers with manual post-validation optimizes the workload vs reliability trade-off of species identification in bat acoustic surveys</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Amazon</style></keyword><keyword><style  face="normal" font="default" size="100%">bioacoustics</style></keyword><keyword><style  face="normal" font="default" size="100%">Chiroptera</style></keyword><keyword><style  face="normal" font="default" size="100%">echolocation</style></keyword><keyword><style  face="normal" font="default" size="100%">Machine-learning algorithms</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://linkinghub.elsevier.com/retrieve/pii/S1574954118300232</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Owing to major technological advances, bioacoustics has become a burgeoning field in ecological research worldwide. Autonomous passive acoustic recorders are becoming widely used to monitor aerial insectivorous bats, and automatic classifiers have emerged to aid researchers in the daunting task of analysing the resulting massive acoustic datasets. However, the scarcity of comprehensive reference call libraries still hampers their wider application in highly diverse tropical assemblages. Capitalizing on a unique acoustic dataset of &amp;gt;650,000 bat call sequences collected over a 3-year period in the Brazilian Amazon, the aims of this study were (a) to assess how pre-identified recordings of free-flying and hand-released bats could be used to train an automatic classification algorithm (random forest), and (b) to optimize acoustic analysis protocols by combining automatic classification with visual post-validation, whereby we evaluated the proportion of sound files to be post-validated for different thresholds of classification accuracy. Classifiers were trained at species or sonotype (group of species with similar calls) level. Random forest models confirmed the reliability of using calls of both free-flying and hand-released bats to train custom-built automatic classifiers. To achieve a general classification accuracy of ~85%, the random forest had to be trained with at least 500 pulses per species/sonotype. For seven out of 20 sonotypes, the most abundant in our dataset, we obtained high classification accuracy (&amp;gt;90%).
Adopting a desired accuracy probability threshold of 95% for the random forest classifier, we found that the percentage of sound files requiring manual post-validation could be reduced by up to 75%, a significant saving in workload. Combining automatic classification with manual identification through fully customizable classifiers implemented in open-source software, as demonstrated here, shows great potential to help overcome the acknowledged risks and biases associated with sole reliance on automatic classification.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Chen, Xing</style></author><author><style face="normal" font="default" size="100%">Zhao, Jun</style></author><author><style face="normal" font="default" size="100%">Chen, Yan-hua</style></author><author><style face="normal" font="default" size="100%">Zhou, Wei</style></author><author><style face="normal" font="default" size="100%">Hughes, Alice C.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Automatic standardized processing and identification of tropical bat calls using deep learning approaches</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style  face="normal" font="default" size="100%">Automated monitoring</style></keyword><keyword><style  face="normal" font="default" size="100%">Automatic processing</style></keyword><keyword><style  face="normal" font="default" size="100%">bats</style></keyword><keyword><style  face="normal" font="default" size="100%">bioacoustics</style></keyword><keyword><style  face="normal" font="default" size="100%">Biodiversity metrics</style></keyword><keyword><style  face="normal" font="default" size="100%">Calls</style></keyword><keyword><style  face="normal" font="default" size="100%">Deep learning</style></keyword><keyword><style  face="normal" font="default" size="100%">echolocation</style></keyword><keyword><style  face="normal" font="default" size="100%">machine learning</style></keyword><keyword><style  face="normal" font="default" size="100%">Monitoring protocol</style></keyword><keyword><style  face="normal" font="default" size="100%">Neural network</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://linkinghub.elsevier.com/retrieve/pii/S0006320719308961</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Consistent and comparable metrics to automatically monitor biodiversity across the landscape remain a gold standard for biodiversity research, yet such approaches have frequently been limited to a very small selection of species for which visual approaches (e.g., camera traps) make continuous monitoring possible. Acoustic-based methods have been widely applied in the monitoring of bats and some other taxa across extended spatial scales, but have yet to be applied to diverse tropical communities.&lt;/p&gt;
&lt;p&gt;In this study, we developed a software program, &amp;ldquo;Waveman&amp;rdquo;, and prepared a reference library using over 880 audio files from 36 Asian bat species. The software incorporates a novel network, &amp;ldquo;BatNet&amp;rdquo;, and a re-checking strategy (ReChk) to maximize accuracy. In Waveman, BatNet outperforms three other published networks (CNNFULL, VggNet and ResNet_v2), with over 90% overall accuracy and an AUC of 0.94 on the ROC plot. Classification accuracy for all 36 species is at least 86% when they are analysed in combination. Moreover, our library preparation and ReChk greatly improved sensitivity and reduced the false positive rate when tested with 15 species for which more detailed and situationally diverse records were available. Finally, BatNet was successfully used to identify Hipposideros larvatus and Rhinolophus siamensis from three different environments. We hope this pipeline is a useful tool for processing bioacoustic data accurately, effectively and automatically, allowing greater standardization and comparability for researchers studying bat activity across space and time, and thereby providing a consistent tool for monitoring biodiversity for management and conservation.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Eitan, O.</style></author><author><style face="normal" font="default" size="100%">Kosa, G.</style></author><author><style face="normal" font="default" size="100%">Yovel, Y.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Sensory gaze stabilization in echolocating bats</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">active sensing</style></keyword><keyword><style  face="normal" font="default" size="100%">bats</style></keyword><keyword><style  face="normal" font="default" size="100%">echolocation</style></keyword><keyword><style  face="normal" font="default" size="100%">gaze stabilization</style></keyword><keyword><style  face="normal" font="default" size="100%">sensory perception</style></keyword><keyword><style  face="normal" font="default" size="100%">tracking</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://royalsocietypublishing.org/doi/10.1098/rspb.2019.1496</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Sensing from a moving platform is challenging for both man-made machines and animals. Animals&amp;#39; heads jitter during movement, so if the sensors they carry are not stabilized, any spatial estimation might be biased. Flying animals, like bats, suffer seriously from this problem because flapping flight induces rapid changes in acceleration that move the body up and down. For echolocating bats, the problem is crucial: because they emit a sound to sense the world, an unstable head means sound energy pointed in the wrong direction. It is unknown how bats mitigate this problem. By tracking the head and body of flying fruit bats, we show that they stabilize their heads, accurately maintaining a fixed acoustic gaze relative to a target. Bats can solve the stabilization task even in complete darkness using only echo-based information. Moreover, the bats point their echolocation beam below the target and not towards it, a strategy that should result in better estimations of target elevation.&lt;/p&gt;
</style></abstract></record></records></xml>