<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Roch, Marie A.</style></author><author><style face="normal" font="default" size="100%">Baumann-Pickering, Simone</style></author><author><style face="normal" font="default" size="100%">Cholewiak, Danielle</style></author><author><style face="normal" font="default" size="100%">Fleishman, Erica</style></author><author><style face="normal" font="default" size="100%">Frasier, Kaitlin E.</style></author><author><style face="normal" font="default" size="100%">Glotin, Hervé</style></author><author><style face="normal" font="default" size="100%">Helble, Tyler A.</style></author><author><style face="normal" font="default" size="100%">Hildebrand, John A.</style></author><author><style face="normal" font="default" size="100%">Klinck, Holger</style></author><author><style face="normal" font="default" size="100%">Lindeneau, Scott</style></author><author><style face="normal" font="default" size="100%">Liu, Xiaobai</style></author><author><style face="normal" font="default" size="100%">Nosal, Eva-Marie</style></author><author><style face="normal" font="default" size="100%">Palmer, Kaitlin</style></author><author><style face="normal" font="default" size="100%">Shiu, Yu</style></author><author><style face="normal" font="default" size="100%">Singh, Gurisht</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The use of context in machine learning for bioacoustics</style></title><secondary-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></secondary-title><short-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></short-title></titles><dates><year><style face="normal" font="default"
size="100%">2018</style></year><pub-dates><date><style face="normal" font="default" size="100%">Jan-09-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://asa.scitation.org/doi/10.1121/1.5067665</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">144</style></volume><pages><style face="normal" font="default" size="100%">1728</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Biological acoustic signals are often produced in context. Contextual information includes factors such as the timing between calls, conspecific or interspecies cues, and physical environmental cues such as sunrise or sunset. We show how some forms of contextual information can be used to improve the results of detection and classification tasks for biological acoustic signals. We examine how context can be used to improve labeled data, resulting in more accurate classification results, as well as how learners can exploit context directly. We demonstrate these improvements via two bioacoustic detection/classification tasks. The first algorithm detects odontocete echolocation clicks. We used a decision support system that allowed analysts to label echolocation clicks using between-call timing cues as well as other measurements, and found that deep learners trained with these high-quality data are able to detect clicks in adverse environments. The second algorithm applies contextual information surrounding North Atlantic right whale upcalls to improve precision and recall. [This work was supported by ONR Grant Nos. N00014-17-1-2867 and N00014-15-1-2299.]
</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record></records></xml>