<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Glotin, Hervé</style></author><author><style face="normal" font="default" size="100%">Spong, Paul</style></author><author><style face="normal" font="default" size="100%">Symonds, Helena</style></author><author><style face="normal" font="default" size="100%">Roger, Vincent</style></author><author><style face="normal" font="default" size="100%">Balestriero, Randall</style></author><author><style face="normal" font="default" size="100%">Ferrari, Maxence</style></author><author><style face="normal" font="default" size="100%">Poupard, Marion</style></author><author><style face="normal" font="default" size="100%">Towers, Jared</style></author><author><style face="normal" font="default" size="100%">Veirs, Scott</style></author><author><style face="normal" font="default" size="100%">Marxer, Ricard</style></author><author><style face="normal" font="default" size="100%">Giraudet, Pascale</style></author><author><style face="normal" font="default" size="100%">Pilkinton, James</style></author><author><style face="normal" font="default" size="100%">Veirs, Val</style></author><author><style face="normal" font="default" size="100%">Wood, Jason</style></author><author><style face="normal" font="default" size="100%">Ford, John</style></author><author><style face="normal" font="default" size="100%">Dakin, Thomas</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Deep learning for ethoacoustical mapping: Application to a single Cachalot long term recording on joint observatories in Vancouver Island</style></title><secondary-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></secondary-title><short-title><style face="normal" font="default"
size="100%">The Journal of the Acoustical Society of America</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-09-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://asa.scitation.org/doi/10.1121/1.5067855</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">144</style></volume><pages><style face="normal" font="default" size="100%">1776 - 1777</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;During February and March 2018, a lone sperm whale known as Yukusam was recorded first by Orcalab in Johnstone Strait and subsequently on multiple hydrophones within the Salish Sea [1]. We learn and denoise these multichannel click trains with convolutional neural network (CNN) autoencoders. Then, we build a map of the echolocation clicks to elucidate variations in the acoustic behavior of this unique animal over time, across different environments, and under distinct levels of boat noise. While a CNN can approximate an optimal kernel decomposition, it requires large amounts of data. Via spline functionals, we offer analytic kernels with learnable coefficients to reduce this requirement. We [1-3] identify an analytic mother wavelet to represent the input signal and directly learn the wavelet support from scratch by gradient descent on the parameters of cubic splines [2]. Supplemental material: http://sabiod.org/yukusam [1] Balestriero, Roger, Glotin, Baraniuk, Semi-Supervised Learning via New Deep Network Inversion, arXiv preprint arXiv:1711.04313, 2017 [2] Balestriero, Cosentino, Glotin, Baraniuk, WaveletNet: Spline Filters for End-to-End Deep Learning, Int. Conf. on Machine Learning, ICML, Stockholm, http://sabiod.org/bib, 2018 [3] Spong P., Symonds H., et al., Joint Observatories Following a Single Male Cachalot during 12 Weeks&amp;mdash;The Yukusam Story, ASA 2018.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Roger, Vincent</style></author><author><style face="normal" font="default" size="100%">Ferrari, Maxence</style></author><author><style face="normal" font="default" size="100%">Marxer, Ricard</style></author><author><style face="normal" font="default" size="100%">Chamroukhi, Faicel</style></author><author><style face="normal" font="default" size="100%">Glotin, Hervé</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Towards the topology of autoencoder of calls versus clicks of marine mammal</style></title><secondary-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></secondary-title><short-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-09-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://asa.scitation.org/doi/10.1121/1.5067859</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">144</style></volume><pages><style face="normal" font="default" size="100%">1777 - 1778</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;The goal is to learn features and representations adapted to cetacean sound dynamics without any priors. Thus, we develop data-driven models to generate the voiced calls and clicks of cetacean audio signals. We learn representations and features of stationary or nonstationary emissions from raw audio using neural networks. We use different types of convolutions (causal, with strides, with dilation [1]), or gradient inversion [2]. Experiments are conducted on various kinds of humpback whale calls from the NIPS4B challenge [3] and on orca calls. We compare the topologies for transient encoding on Physeter and Inia g. For each model, we detail the resulting filters and discuss the topology. We acknowledge R&amp;eacute;gion PACA and NortekMED for Roger&amp;rsquo;s PhD grant, and DGA and R&amp;eacute;gion Hauts-de-France for Ferrari&amp;rsquo;s PhD grant. [1] Oord, Dieleman, Zen, Simonyan, Vinyals, Graves et al., WaveNet: A Generative Model for Raw Audio, arXiv:1609.03499, 2016 [2] Balestriero, Roger, Glotin, Baraniuk, Semi-Supervised Learning via New Deep Network Inversion, arXiv:1711.04313, 2017 [3] Glotin, LeCun, Mallat et al., Proc. 1st Workshop on Neural Information Processing for Bioacoustics (NIPS4B), joint to NIPS, USA, 2013, http://sabiod.org/nips4b/challenge2.html, http://sabiod.org/NIPS4B2013_book.pdf&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record></records></xml>