<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Vu, Thinh Tien</style></author><author><style face="normal" font="default" size="100%">Doherty, Paul F.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Using bioacoustics to monitor gibbons</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">bioacoustics</style></keyword><keyword><style  face="normal" font="default" size="100%">Dakrong</style></keyword><keyword><style  face="normal" font="default" size="100%">Gibbon</style></keyword><keyword><style  face="normal" font="default" size="100%">Nomascus</style></keyword><keyword><style  face="normal" font="default" size="100%">Occupancy model</style></keyword><keyword><style  face="normal" font="default" size="100%">primate</style></keyword><keyword><style  face="normal" font="default" size="100%">Smartphone</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2021</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/s10531-021-02139-1</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Monitoring wildlife population trends is critical for the conservation of endangered species and measuring the efficacy of management activities. Recently, passive acoustic monitoring has emerged as a useful wildlife monitoring tool and automatic recorders have been used to detect the presence of gibbons in protected areas of Vietnam. However, these recording devices can be expensive, cumbersome, and difficult to operate in some areas with gibbons. 
Therefore, inexpensive, lightweight, and easily operated recording devices are needed for wildlife monitoring. In this study, we employed mobile smartphones to detect the presence and distribution, and to estimate the occurrence probability, of the northern yellow-cheeked crested gibbon (Nomascus annamensis) in Dakrong Nature Reserve (405.3 km2), Vietnam. We surveyed gibbons from February to July 2019, during the dry season, at 95 sites that were systematically spaced throughout the nature reserve. We used the software package RAVEN to analyze the sound data and to identify gibbon calls. We detected gibbon calls at 39 out of 95 recording sites. With these data and an occupancy model, we estimated the occurrence probability and examined the effects of environmental factors on it. Assuming a 600 m detection distance, the model-averaged occurrence probability for the nature reserve was 0.44 (SE = 0.06). The area of rich (&amp;gt; 100 m3/ha) and medium (&amp;gt; 200 m3/ha) evergreen forest within 1 km of the recording posts was the most important predictor of, and positively correlated with, occurrence, with lower occurrence in poor or regrowth forest, plantations, or on bare land. Bioacoustic methods can potentially be used in large-scale gibbon surveys, and the technology is especially attractive given the low cost. Additional work on estimating detection distances and identifying individual gibbon groups using bioacoustics will be useful next steps.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Riondato, Isidoro</style></author><author><style face="normal" font="default" size="100%">Gamba, Marco</style></author><author><style face="normal" font="default" size="100%">Tan, Chia L.</style></author><author><style face="normal" font="default" size="100%">Niu, Kefeng</style></author><author><style face="normal" font="default" size="100%">Narins, Peter M.</style></author><author><style face="normal" font="default" size="100%">Yang, Yeqin</style></author><author><style face="normal" font="default" size="100%">Giacoma, Cristina</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Allometric escape and acoustic signal features facilitate high-frequency communication in an endemic Chinese primate</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">acoustic adaptation hypothesis</style></keyword><keyword><style  face="normal" font="default" size="100%">Principle of acoustic allometry</style></keyword><keyword><style  face="normal" font="default" size="100%">Rhinopithecus brelichi</style></keyword><keyword><style  face="normal" font="default" size="100%">Snub-nosed monkey</style></keyword><keyword><style  face="normal" font="default" size="100%">Sound propagation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2021</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/s00359-021-01465-7</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;The principle of acoustic allometry&amp;mdash;the larger the animal, the lower its calls&amp;#39; fundamental 
frequency&amp;mdash;is generally observed across terrestrial mammals. Moreover, according to the Acoustic Adaptation Hypothesis, open habitats favor the propagation of high-frequency calls compared to habitats with complex vegetational structures. We carried out playback experiments in which the calls of the Guizhou snub-nosed monkey (Rhinopithecus brelichi) were used as stimuli in sound attenuation and degradation experiments to test the hypothesis that propagation of Guizhou snub-nosed monkey calls is favored above vs through the forest floor vegetation. We found that low-pitched Guizhou snub-nosed monkey vocalizations suffered less attenuation than high-pitched calls. Guizhou snub-nosed monkeys were observed emitting high-pitched calls from 1.5 to 5.0 m above the ground. The use of high-pitched calls from these heights coupled with the concomitant behavior of moving about above the understory may provide a signal for receivers which maximizes potential transmission and efficacy. Our results support the Acoustic Adaptation Hypothesis and suggest that by uncoupling its vocal output from its size, this monkey can produce a high-pitched call with a broad spectral bandwidth, thereby increasing both its saliency and the frequency range over which the animal may more effectively communicate in its natural habitat.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lau, Allison R.</style></author><author><style face="normal" font="default" size="100%">Clink, Dena J.</style></author><author><style face="normal" font="default" size="100%">Bales, Karen L.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Individuality in the vocalizations of infant and adult coppery titi monkeys ( &lt;i&gt;Plecturocebus cupreus&lt;/i&gt;               )</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">discriminant function analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">pair bonding</style></keyword><keyword><style  face="normal" font="default" size="100%">vocal duetting</style></keyword><keyword><style  face="normal" font="default" size="100%">vocalization individuality</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2020</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://onlinelibrary.wiley.com/doi/abs/10.1002/ajp.23134</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;As social animals, many primates use acoustic communication to maintain relationships. Vocal individuality has been documented in a diverse range of primate species and call types, many of which have presumably different functions. Auditory recognition of one&amp;#39;s neighbors may confer a selective advantage if identifying conspecifics decreases the need to participate in costly territorial behaviors. Alternatively, vocal individuality may be nonadaptive and the result of a unique combination of genetics and environment. 
Pair‐bonded primates, in particular, often participate in coordinated vocal duets that can be heard over long distances by neighboring conspecifics. In contrast to adult calls, infant vocalizations are short‐range and used for intragroup communication. Here, we provide two separate but complementary analyses of vocal individuality in distinct call types of coppery titi monkeys (Plecturocebus cupreus) to test whether individuality occurs in call types from animals of different age classes with presumably different functions. We analyzed 600 trill vocalizations from 30 infants and 169 pulse‐chirp duet vocalizations from 30 adult titi monkeys. We predicted that duet contributions would exhibit a higher degree of individuality than infant trills, given their assumed function for long‐distance, intergroup communication. We estimated 7 features from infant trills and 16 features from spectrograms of adult pulse‐chirps, then used discriminant function analysis with leave‐one‐out cross‐validation to classify individuals. We correctly classified infants with 48% accuracy and adults with 83% accuracy. To further investigate variance in call features, we used a multivariate variance components model to estimate variance partitioning in features across two levels: within‐ and between‐individuals. Between‐individual variance was the most important source of variance for all features in adults, and three of four features in infants. We show that pulse‐chirps of adult titi monkey duets are individually distinct, and infant trills are less individually distinct, which may be due to the different functions of the vocalizations.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Clink, Dena J.</style></author><author><style face="normal" font="default" size="100%">Ahmad, Abdul Hamid</style></author><author><style face="normal" font="default" size="100%">Klinck, Holger</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Brevity is not a universal in animal communication: evidence for compression depends on the unit of analysis in small ape vocalizations</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">compression</style></keyword><keyword><style  face="normal" font="default" size="100%">Hylobates</style></keyword><keyword><style  face="normal" font="default" size="100%">Menzerath’s Law</style></keyword><keyword><style  face="normal" font="default" size="100%">unsupervised clustering</style></keyword><keyword><style  face="normal" font="default" size="100%">Zipf’s Law of abbreviation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2020</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://royalsocietypublishing.org/doi/10.1098/rsos.200151</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Evidence for compression, or minimization of code length, has been found across biological systems from genomes to human language and music. Two linguistic laws&amp;mdash;Menzerath&amp;rsquo;s Law (which states that longer sequences consist of shorter constituents) and Zipf&amp;rsquo;s Law of abbreviation (a negative relationship between signal length and frequency of use)&amp;mdash; are predictions of compression. 
It has been proposed that compression is a universal in animal communication, but there have been mixed results, particularly in reference to Zipf&amp;rsquo;s Law of abbreviation. Like songbirds, male gibbons (Hylobates muelleri) engage in long solo bouts with unique combinations of notes which combine into phrases. We found strong support for Menzerath&amp;rsquo;s Law as the longer a phrase, the shorter the notes. To identify phrase types, we used state-of-the-art affinity propagation clustering, and were able to predict phrase types using support vector machines with a mean accuracy of 74%. Based on unsupervised phrase type classification, we did not find support for Zipf&amp;rsquo;s Law of abbreviation. Our results indicate that adherence to linguistic laws in male gibbon solos depends on the unit of analysis. We conclude that principles of compression are applicable outside of human language, but may act differently across levels of organization in biological systems.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Mayer, Walter</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Gruppenverhalten von Totenkopfaffen unter besonderer Berücksichtigung der Kommunikationstheorie</style></title></titles><dates><year><style  face="normal" font="default" size="100%">1971</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/BF00288734</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;This paper presents an analysis of interactions of squirrel monkeys (Saimiri sciureus) by using the principles of bidirectional communication theory. This theory of Marko and Neuburger is a generalization of Shannon&amp;#39;s information theory and allows the description of the dominance relations of a two-partner situation. In the present case the observed behavioural sequences nearly correspond to Markoff chains of the 4th order. Two typical strategies regarding the exercise of dominance were observed during a change in dominance between two animals. A comparison with respective findings on social psychology is possible.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Amento, Brian</style></author><author><style face="normal" font="default" size="100%">Hill, Will</style></author><author><style face="normal" font="default" size="100%">Terveen, Loren</style></author></authors><tertiary-authors><author><style face="normal" font="default" size="100%">Terveen, Loren</style></author><author><style face="normal" font="default" size="100%">Wixon, Dennis</style></author></tertiary-authors></contributors><titles><title><style face="normal" font="default" size="100%">The sound of one hand</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2002</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://portal.acm.org/citation.cfm?doid=506443</style></url></web-urls></urls><isbn><style face="normal" font="default" size="100%">1581134541</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Two hundred and fifty years ago the Japanese Zen master Hakuin asked the question, &amp;quot;What is the Sound of the Single Hand?&amp;quot; This koan has long served as an aid to meditation but it also describes our new interaction technique. We discovered that gentle fingertip gestures such as tapping, rubbing, and flicking make quiet sounds that travel by bone conduction throughout the hand. A small wristband-mounted contact microphone can reliably and inexpensively sense these sounds. We harnessed this &amp;quot;sound in the hand&amp;quot; phenomenon to build a wristband-mounted bio-acoustic fingertip gesture interface. 
The bio-acoustic interface recognizes some common gestures that state-of-the-art glove and image-processing techniques capture, but in a smaller, mobile package.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Muir, Jen</style></author><author><style face="normal" font="default" size="100%">Barnett, Adrian</style></author><author><style face="normal" font="default" size="100%">Svensson, Magdalena S.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The Vocal Repertoire of Golden-Faced Sakis, Pithecia chrysocephala, and the Relationship Between Context and Call Structure</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Behavior</style></keyword><keyword><style  face="normal" font="default" size="100%">communication</style></keyword><keyword><style  face="normal" font="default" size="100%">Neotropics</style></keyword><keyword><style  face="normal" font="default" size="100%">Pitheciidae</style></keyword><keyword><style  face="normal" font="default" size="100%">Welfare</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2020</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/s10764-019-00125-7</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Vocalizations are a vital form of communication. Call structure and use may change depending on emotional arousal, behavioral context, sex, or social complexity. Pithecia chrysocephala (golden-faced sakis) are a little-studied Neotropical species. We aimed to determine the vocal repertoire of P. chrysocephala and the influence of context on call structure. We collected data June&amp;ndash;August 2018 in an urban secondary forest fragment in Manaus, Amazonian Brazil. 
We took continuous vocal recordings in 10-min blocks with 5-min breaks during daily follows of two groups. We recorded scan samples of group behavior at the start and end of blocks and used ad libitum behavioral recording during blocks. We collected 70 h of data and analyzed 1500 calls. Lowest frequencies ranged 690.1&amp;ndash;5879 Hz in adults/subadults and 5393.6&amp;ndash;9497.8 Hz in the only juvenile sampled. We identified eight calls, three of which were juvenile specific. We found that, while repertoire size was similar to that of other New World monkeys of similar group size and structure, it also resembled those with larger group sizes and different social structures. The durations of Chuck calls were shorter for feeding contexts compared to hostile contexts, but frequencies were higher than predicted if call structure reflects motivation. This finding may be due to the higher arousal involved in hostile situations, or because P. chrysocephala use Chuck calls in appeasement, similar to behavior seen in other primates. Call structures did not differ between sexes, potentially linked to the limited size dimorphism in this species. Our findings provide a foundation for further investigation of Pithecia vocal behavior and phylogeny, as well as applications for both captive welfare (stress relief) and field research (playbacks for surveys).&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Schwartz, Jay W.</style></author><author><style face="normal" font="default" size="100%">Engelberg, Jonathan W. M.</style></author><author><style face="normal" font="default" size="100%">Gouzoules, Harold</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Was That a Scream? Listener Agreement and Major Distinguishing Acoustic Features</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">acoustics</style></keyword><keyword><style  face="normal" font="default" size="100%">Forced-choice task</style></keyword><keyword><style  face="normal" font="default" size="100%">Non-linguistic vocalization</style></keyword><keyword><style  face="normal" font="default" size="100%">Roughness</style></keyword><keyword><style  face="normal" font="default" size="100%">Scream</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/s10919-019-00325-y</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Human screams have been suggested to comprise a salient and readily identified call type, yet few studies have explored the degree to which people agree on what constitutes a scream, and the defining acoustic structure of screams has not been fully determined. In this study, participants listened to 75 human vocal sounds, representing both a broad acoustical range and array of emotional contexts, and classified each as to whether it was a scream or not. 
Participants showed substantial agreement on which sounds were considered screams, consistent with the idea of screams as a basic call type. Agreement on classifications was related to participant gender, emotion processing accuracy, and empathy. To characterize the acoustic structure of screams, we measured the stimuli on 27 acoustic parameters. Principal components analysis and generalized linear mixed modeling indicated that classification as a scream was positively correlated with 3 acoustic dimensions: one corresponding to high pitch and roughness, another corresponding to wide fundamental frequency variability and narrow interquartile range bandwidth, and a third positively correlated with peak frequency slope. Twenty-six stimuli were agreed upon by &amp;gt;&amp;thinsp;90% of participants to be screams, but these were not acoustically homogeneous, and others evoked mixed responses. These results suggest that while screams might represent a salient and possibly innate call type, they also exhibit perceptual and acoustic gradation, perhaps reflecting the wide range of emotions and contexts in which they occur.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Schötz, Susanne</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Pastorinho, M. Ramiro</style></author><author><style face="normal" font="default" size="100%">Sousa, Ana Catarina A.</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Phonetic Variation in Cat–Human Communication</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Cat vocalisations</style></keyword><keyword><style  face="normal" font="default" size="100%">Cat–human communication</style></keyword><keyword><style  face="normal" font="default" size="100%">Phonetic description and transcription of cat sounds</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/978-3-030-30734-9</style></url></web-urls></urls><isbn><style face="normal" font="default" size="100%">978-3-030-30733-2</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;In this chapter, the phonetic variation of the vocal communication between domestic cats and humans is summarised and described, based on previous research as well as more recent studies and observations. Emphasis lies on classifying and describing the different vocalisation types of the cat using phonetic methods and terminology. The articulation, phonetic transcription and acoustic patterns of the most common vocalisation types are described. 
In addition, the segments (vowels and consonants) and the prosody (tone, intonation, rhythm and dynamics) of cat sounds, as well as human perception of cat vocalisations, are summarised.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Belyk, Michel</style></author><author><style face="normal" font="default" size="100%">Schultz, Benjamin G.</style></author><author><style face="normal" font="default" size="100%">Correia, Joao</style></author><author><style face="normal" font="default" size="100%">Beal, Deryk S.</style></author><author><style face="normal" font="default" size="100%">Kotz, Sonja A.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Whistling shares a common tongue with speech: bioacoustics from real-time MRI of the human vocal tract</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">communication</style></keyword><keyword><style  face="normal" font="default" size="100%">evolution</style></keyword><keyword><style  face="normal" font="default" size="100%">magnetic resonance imaging</style></keyword><keyword><style  face="normal" font="default" size="100%">speech</style></keyword><keyword><style  face="normal" font="default" size="100%">tongue</style></keyword><keyword><style  face="normal" font="default" size="100%">whistle</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://royalsocietypublishing.org/doi/10.1098/rspb.2019.1116</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Most human communication is carried by modulations of the voice. However, a wide range of cultures has developed alternative forms of communication that make use of a whistled sound source. 
For example, whistling is used as a highly salient signal for capturing attention, and can have iconic cultural meanings such as the catcall, enact a formal code as in boatswain&amp;#39;s calls or stand as a proxy for speech in whistled languages. We used real-time magnetic resonance imaging to examine the muscular control of whistling to describe a strong association between the shape of the tongue and the whistled frequency. This bioacoustic profile parallels the use of the tongue in vowel production. This is consistent with the role of whistled languages as proxies for spoken languages, in which one of the acoustical features of speech sounds is substituted with a frequency-modulated whistle. Furthermore, previous evidence that non-human apes may be capable of learning to whistle from humans suggests that these animals may have similar sensorimotor abilities to those that are used to support speech in humans.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Zürcher, Yvonne</style></author><author><style face="normal" font="default" size="100%">Willems, Erik P.</style></author><author><style face="normal" font="default" size="100%">Burkart, Judith M.</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Jon T. Sakata</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Are dialects socially learned in marmoset monkeys? Evidence from translocation experiments</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://dx.plos.org/10.1371/journal.pone.0222486</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;The acoustic properties of vocalizations in common marmosets differ between populations. These differences may be the result of social vocal learning, but they can also result from environmental or genetic differences between populations. We performed translocation experiments to separately quantify the influence of a change in the physical environment (experiment 1), and a change in the social environment (experiment 2) on the acoustic properties of calls from individual captive common marmosets. If population differences were due to genetic differences, we expected no change in the vocalizations of the translocated marmosets. If differences were due to environmental factors, we expected vocalizations to permanently change contingent with environmental changes. 
If social learning was involved, we expected that the vocalizations of animals translocated to a new population with a different dialect would become more similar to the new population. In experiment 1, we translocated marmosets to a different physical environment without changing the social composition of the groups or their neighbours. Immediately after the translocation to the new facility, one out of three call types showed a significant change in call structure, but 5&amp;ndash;6 weeks later, the calls were no longer different from before the translocation. Thus, the novel physical environment did not induce long lasting changes in the vocalizations of the marmosets. In experiment 2, we translocated marmosets to a new population with a different dialect. Importantly, our previous work had shown that these two populations differed significantly in vocalization structure. The translocated marmosets were still housed in their original social group, but after translocation they were surrounded by the vocalizations from neighbouring groups of the new population. The vocal distance between the translocated individuals and the new population decreased for two out of three call types over 16 weeks. Thus, even without direct social contact or interaction, the vocalizations of the translocated animals converged towards the new population, indicating that common marmosets can modify their calls due to acoustic input from conspecifics alone, via crowd vocal learning. To our knowledge, this is the first study able to distinguish between different explanations for vocal dialects as well as to show crowd vocal learning in a primate species.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Zhang, Yisi S.</style></author><author><style face="normal" font="default" size="100%">Takahashi, Daniel Y.</style></author><author><style face="normal" font="default" size="100%">Liao, Diana A.</style></author><author><style face="normal" font="default" size="100%">Ghazanfar, Asif A.</style></author><author><style face="normal" font="default" size="100%">Elemans, Coen P. H.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Vocal state change through laryngeal development</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.nature.com/articles/s41467-019-12588-6</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Across vertebrates, progressive changes in vocal behavior during postnatal development are typically attributed solely to developing neural circuits. How the changing body influences vocal development remains unknown. Here we show that state changes in the contact vocalizations of infant marmoset monkeys, which transition from noisy, low frequency cries to tonal, higher pitched vocalizations in adults, are caused partially by laryngeal development. Combining analyses of natural vocalizations, motorized excised larynx experiments, tensile material tests and high-speed imaging, we show that vocal state transition occurs via a sound source switch from vocal folds to apical vocal membranes, producing louder vocalizations with higher efficiency. 
We show with an empirically based model of descending motor control how neural circuits could interact with changing laryngeal dynamics, leading to adaptive vocal development. Our results emphasize the importance of embodied approaches to vocal development, where exploiting biomechanical consequences of changing material properties can simplify motor control, reducing the computational load on the developing brain.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Parsons, Christine E.</style></author><author><style face="normal" font="default" size="100%">LeBeau, Richard T.</style></author><author><style face="normal" font="default" size="100%">Kringelbach, Morten L.</style></author><author><style face="normal" font="default" size="100%">Young, Katherine S.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Pawsitively sad: pet-owners are more sensitive to negative emotion in animal distress vocalizations</style></title><secondary-title><style face="normal" font="default" size="100%">Royal Society Open Science</style></secondary-title><short-title><style face="normal" font="default" size="100%">R. Soc. open sci.</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">animal vocalizations</style></keyword><keyword><style  face="normal" font="default" size="100%">cat miaows</style></keyword><keyword><style  face="normal" font="default" size="100%">crying</style></keyword><keyword><style  face="normal" font="default" size="100%">dog whines</style></keyword><keyword><style  face="normal" font="default" size="100%">emotion perception</style></keyword><keyword><style  face="normal" font="default" size="100%">pet-owners</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jun-08-2021</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://royalsocietypublishing.org/doi/10.1098/rsos.181555</style></url></web-urls></urls><volume><style face="normal" font="default" 
size="100%">6</style></volume><pages><style face="normal" font="default" size="100%">181555</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Pets have numerous, effective methods to communicate with their human hosts. Perhaps most conspicuous of these are distress vocalizations: in cats, the &amp;lsquo;miaow&amp;rsquo; and in dogs, the &amp;lsquo;whine&amp;rsquo; or &amp;lsquo;whimper&amp;rsquo;. We compared a sample of young adults who owned cats and/or dogs (&amp;lsquo;pet-owners&amp;rsquo; n = 264) and who did not (n = 297) on their ratings of the valence of animal distress vocalizations, taken from a standardized database of sounds. We also examined these participants&amp;rsquo; self-reported symptoms of anxiety and depression, and their scores on a measure of interpersonal relationship functioning. Pet-owners rated the animal distress vocalizations as sadder than adults who did not own a pet. Cat-owners specifically gave the most negative ratings of cat miaows compared with other participants, but were no different in their ratings of other sounds. Dog sounds were rated more negatively overall, in fact as negatively as human baby cries. Pet-owning adults (cat only, dog only, both) were not significantly different from adults with no pets on symptoms of depression, anxiety or on self-reported interpersonal relationship functioning. We suggest that pet ownership is associated with greater sensitivity to negative emotion in cat and dog distress vocalizations.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">8</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pomberger, Thomas</style></author><author><style face="normal" font="default" size="100%">Risueno-Segovia, Cristina</style></author><author><style face="normal" font="default" size="100%">Gultekin, Yasemin B.</style></author><author><style face="normal" font="default" size="100%">Dohmen, Deniz</style></author><author><style face="normal" font="default" size="100%">Hage, Steffen R.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Cognitive control of complex motor behavior in marmoset monkeys</style></title><secondary-title><style face="normal" font="default" size="100%">Nature Communications</style></secondary-title><short-title><style face="normal" font="default" size="100%">Nat Commun</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-12-2019</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.nature.com/articles/s41467-019-11714-8</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">10</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Marmosets have attracted significant interest in the life sciences. Similarities with human brain anatomy and physiology, such as the granular frontal cortex, as well as the development of transgenic lines and potential for transferring rodent neuroscientific techniques to small primates make them a promising neurodegenerative and neuropsychiatric model system. 
However, whether marmosets can exhibit complex motor tasks in highly controlled experimental designs&amp;mdash;one of the prerequisites for investigating higher-order control mechanisms underlying cognitive motor behavior&amp;mdash;has not been demonstrated. We show that marmosets can be trained to perform vocal behavior in response to arbitrary visual cues in controlled operant conditioning tasks. Our results emphasize the marmoset as a suitable model to study complex motor behavior and the evolution of cognitive control underlying speech.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Vu, Thinh Tien</style></author><author><style face="normal" font="default" size="100%">Tran, Long Manh</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">An Application of Autonomous Recorders for Gibbon Monitoring</style></title><secondary-title><style face="normal" font="default" size="100%">International Journal of Primatology</style></secondary-title><short-title><style face="normal" font="default" size="100%">Int J Primatol</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">bioacoustics</style></keyword><keyword><style  face="normal" font="default" size="100%">Gibbon</style></keyword><keyword><style  face="normal" font="default" size="100%">Nomascu</style></keyword><keyword><style  face="normal" font="default" size="100%">primate</style></keyword><keyword><style  face="normal" font="default" size="100%">s Occupancy model</style></keyword><keyword><style  face="normal" font="default" size="100%">Song meter</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Oct-01-2019</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/s10764-018-0073-3</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Population monitoring is very important in wildlife management and conservation. 
All 18 species of gibbons are considered threatened with extinction and listed on the IUCN Red List of Threatened Species. Thus, understanding and effectively monitoring their population trends and distribution are critical. Thus far, all gibbon surveying and monitoring programs have been conducted by human surveyors; this is expensive, laborious, and dependent on the surveyors&amp;rsquo; skills. In particular, estimating group density often requires a large sample size with several skilled observers working simultaneously in the field. We used autonomous recorders to record the calls of the southern yellow-cheeked crested gibbon (Nomascus gabriellae) for at least 3 days at each of 57 posts in Nam Cat Tien sector, Cat Tien National Park, Vietnam from July to October, 2016. We extracted gibbon calls from the recordings auditorily or visually using spectrograms in RAVEN software. We detected gibbon calls at 40 recording posts during the survey. The proportion of recorders with gibbon calls in the eastern region of Nam Cat Tien sector (mean&amp;thinsp;=&amp;thinsp;0.79; SE&amp;thinsp;=&amp;thinsp;0.13) was higher than that in the western region (mean&amp;thinsp;=&amp;thinsp;0.46; SE&amp;thinsp;=&amp;thinsp;0.11). The estimated probability of occurrence in the eastern region (&amp;psi;&amp;thinsp;=&amp;thinsp;0.56; SE&amp;thinsp;=&amp;thinsp;0.20) was higher than that in the western region (&amp;psi;&amp;thinsp;=&amp;thinsp;0.23; SE&amp;thinsp;=&amp;thinsp;0.16). Passive acoustic data were useful to investigate spatial variation in the probability of occurrence of gibbons. We recommend using autonomous recorders combined with occupancy models to complement human surveyors in gibbon monitoring in areas with low gibbon density because it is efficient, low cost, and not subject to errors caused by human surveyors. In areas of high gibbon density, absolute density estimates achieved by human surveyors might be a more suitable indicator.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Bowling, D. L.</style></author><author><style face="normal" font="default" size="100%">M. Garcia</style></author><author><style face="normal" font="default" size="100%">Dunn, J. C.</style></author><author><style face="normal" font="default" size="100%">Ruprecht, R.</style></author><author><style face="normal" font="default" size="100%">Stewart, A.</style></author><author><style face="normal" font="default" size="100%">Frommolt, K.-H.</style></author><author><style face="normal" font="default" size="100%">Fitch, W. T.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Body size and vocalization in primates and carnivores</style></title><secondary-title><style face="normal" font="default" size="100%">Scientific Reports</style></secondary-title><short-title><style face="normal" font="default" size="100%">Sci Rep</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-12-2017</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.nature.com/articles/srep41070</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">7</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;A fundamental assumption in bioacoustics is that large animals tend to produce vocalizations with lower frequencies than small animals. 
This inverse relationship between body size and vocalization frequencies is widely considered to be foundational in animal communication, with prominent theories arguing that it played a critical role in the evolution of vocal communication, in both production and perception. A major shortcoming of these theories is that they lack a solid empirical foundation: rigorous comparisons between body size and vocalization frequencies remain scarce, particularly among mammals. We address this issue here in a study of body size and vocalization frequencies conducted across 91 mammalian species, covering most of the size range in the orders Primates (n&amp;thinsp;=&amp;thinsp;50; ~0.11&amp;ndash;120&amp;thinsp;Kg) and Carnivora (n&amp;thinsp;=&amp;thinsp;41; ~0.14&amp;ndash;250&amp;thinsp;Kg). We employed a novel procedure designed to capture spectral variability and standardize frequency measurement of vocalization data across species. The results unequivocally demonstrate strong inverse relationships between body size and vocalization frequencies in primates and carnivores, filling a long-standing gap in mammalian bioacoustics and providing an empirical foundation for theories on the adaptive function of call frequency in animal communication.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Enari, Hiroto</style></author><author><style face="normal" font="default" size="100%">Enari, Haruka S.</style></author><author><style face="normal" font="default" size="100%">Okuda, Kei</style></author><author><style face="normal" font="default" size="100%">Maruyama, Tetsuya</style></author><author><style face="normal" font="default" size="100%">Okuda, Kana N.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">An evaluation of the efficiency of passive acoustic monitoring in detecting deer and primates in comparison with camera traps</style></title><secondary-title><style face="normal" font="default" size="100%">Ecological Indicators</style></secondary-title><short-title><style face="normal" font="default" size="100%">Ecological Indicators</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Cervus nippon</style></keyword><keyword><style  face="normal" font="default" size="100%">Ecoacoustic monitoring</style></keyword><keyword><style  face="normal" font="default" size="100%">Lag-phase management</style></keyword><keyword><style  face="normal" font="default" size="100%">Macaca fuscata</style></keyword><keyword><style  face="normal" font="default" size="100%">passive acoustic monitoring</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-03-2019</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">https://linkinghub.elsevier.com/retrieve/pii/S1470160X18309257</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">98</style></volume><pages><style face="normal" font="default" size="100%">753 - 762</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;In recent years, camera traps have rapidly become popular for the large-scale monitoring of wildlife distribution and population; however, we should not ignore the uncertainty regarding the reliability of camera-based monitoring by inexperienced data gatherers. This study introduces passive acoustic monitoring (PAM) as an easier technique for monitoring terrestrial mammals that uses the sound cues that they produce. To validate the efficacy of PAM, we quantitatively compared the detection areas and rates between sound cues (from PAM) and visual cues (from camera traps) of two mammals&amp;mdash;the sika deer Cervus nippon and the Japanese macaque Macaca fuscata&amp;mdash;across seven study sites in eastern Japan with different population densities. To collect sound cues, we set up multiple autonomous recording units at the sites and continuously recorded ambient sounds, following a pre-determined schedule. The total recording time reached 9081 h for deer and 8235 h for macaques. We then built sound recognizers to automatically detect eight target call types from the recorded data. To collect visual cues, we also set multiple camera traps at the same sites and for the same observation periods. 
The key findings were as follows: (1) the fully automated procedures that only used the recognizers to detect sound cues produced numerous false positive detections when the call type possessed vocal plasticity and variations; (2) the semi-automated procedures, which included an additional step to validate the automated detections by manual screening, exhibited a great improvement in the detectability and recall rates of half of the target calls, reaching &amp;gt; 0.70; (3) when using the semi-automated procedures, the frequencies of deer and macaque detections per trap-day derived from the sound cues were in most cases approximately dozens of times and several times, respectively, higher than those derived from the visual cues; (4) the main advantage of PAM may be its superior detection areas, which were 100&amp;ndash;7000 times wider than those of camera traps; and (5) the current success of the recognition of different call types of each species could broaden the use of PAM, which is not possible for camera traps. PAM could provide socio-behavioral data (i.e., the frequencies and types of inter-individual vocal communications) that could help understand the status of population dynamics and the group compositions, in addition to information related to the presence or absence of species.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Briseno-Jaramillo, M.</style></author><author><style face="normal" font="default" size="100%">Ramos-Fernández, G.</style></author><author><style face="normal" font="default" size="100%">Palacios-Romo, T. M.</style></author><author><style face="normal" font="default" size="100%">Sosa-López, J. R.</style></author><author><style face="normal" font="default" size="100%">Lemasson, A.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Age and social affinity effects on contact call interactions in free-ranging spider monkeys</style></title><secondary-title><style face="normal" font="default" size="100%">Behavioral Ecology and Sociobiology</style></secondary-title><short-title><style face="normal" font="default" size="100%">Behav Ecol Sociobiol</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Acoustic matching</style></keyword><keyword><style  face="normal" font="default" size="100%">Call exchanges</style></keyword><keyword><style  face="normal" font="default" size="100%">New World monkeys</style></keyword><keyword><style  face="normal" font="default" size="100%">vocal communication</style></keyword><keyword><style  face="normal" font="default" size="100%">vocal learning</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-12-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/s00265-018-2615-2</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">72</style></volume><language><style face="normal" font="default" 
size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Nonhuman primates&amp;rsquo; vocal repertoire has shown little plasticity, with immatures producing adult-like acoustic structures. Yet, the use of different call types shows a degree of socially dependent flexibility during development. In several nonhuman primate species, group members exchange contact calls respecting a set of social and temporal rules that may be learned (e.g., overlap avoidance, turn-taking, social selection of interacting partners, and call type matching). Here, we study the use of contact calls in free-living adult and immature (old and young) spider monkeys (Ateles geoffroyi). We focused our study in two contact call types of the species&amp;rsquo; repertoire: whinnies and high-whinnies. Our results suggest that individuals in all age classes produced both call types, with immatures producing less frequently the whinny call type. Immature individuals exchanged calls less often than adults, although their contribution increased with age. Conversely, mature individuals regulated their emissions by (1) exchanging more calls with their preferred affiliative partner and (2) matching the call type, while immatures did not. Our results show that contact call usage changes during development and suggest that adult rules might be learned. We argue that call matching is a &amp;ldquo;conversational rule&amp;rdquo; that young individuals acquire with apparent call-type-dependent variations during development. Our findings support the idea that social factors influence vocal development in nonhuman primates.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">12</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Adams, Dara B.</style></author><author><style face="normal" font="default" size="100%">Kitchen, Dawn M.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Experimental evidence that titi and saki monkey alarm calls deter an ambush predator</style></title><secondary-title><style face="normal" font="default" size="100%">Animal Behaviour</style></secondary-title><short-title><style face="normal" font="default" size="100%">Animal Behaviour</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Callicebus</style></keyword><keyword><style  face="normal" font="default" size="100%">interspecific communication</style></keyword><keyword><style  face="normal" font="default" size="100%">Leopardus pardalisocelot</style></keyword><keyword><style  face="normal" font="default" size="100%">perception advertisement</style></keyword><keyword><style  face="normal" font="default" size="100%">Pithecia</style></keyword><keyword><style  face="normal" font="default" size="100%">playback experiment</style></keyword><keyword><style  face="normal" font="default" size="100%">pursuit-deterrent signal</style></keyword><keyword><style  face="normal" font="default" size="100%">radiotelemetry</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-11-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">https://linkinghub.elsevier.com/retrieve/pii/S000334721830280X</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">145</style></volume><pages><style face="normal" font="default" size="100%">141 - 147</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Many animals use alarm calls in intraspecific communication to warn conspecifics of a predator&amp;#39;s presence or to elicit coordinated group responses. However, alarm calls may also be aimed directly at the predator to discourage further pursuit. These &amp;#39;pursuit-deterrent&amp;#39; signals are particularly important in the presence of ambush predators that rely on stealth to hunt prey. Here, we conducted playback experiments over a 16-month period on radiocollared ocelots, Leopardus pardalis, in Peru using audio stimuli of titi monkey (Callicebus toppini) and saki monkey (Pithecia rylandsi) alarm calls, with nonalarm loud calls as controls. We predicted that, if titi and saki alarm calls function as deterrent signals, then ocelots would move away from the sound source and leave the area following exposure to alarms but not following controls. We tracked ocelots via radiotelemetry for 30 min prior to and 30 min following experiments. At 15 min intervals we noted subject location, whether the cat was stationary or moving towards, away from or parallel to the playback area (calculated using a deflection angle) and distance travelled. Results showed a significantly different pattern in response movement between playback trials; ocelots moved away from the sound source in the majority of alarm trials but remained stationary/hidden or moved in a variety of directions following control trials. 
Ocelots also moved significantly farther following exposure to alarm trials than following exposure to controls. We conclude that ocelots can distinguish alarm calls from other loud calls and are deterred by alarm-calling monkeys. This is the first study to use playbacks on wild predators to test the pursuit-deterrent function of primate alarm calls.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Eliades, Steven</style></author><author><style face="normal" font="default" size="100%">Tsunada, Joji</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">From behavior to physiology and back again: The role of auditory cortex in vocal production and control</style></title><secondary-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></secondary-title><short-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-09-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://asa.scitation.org/doi/10.1121/1.5068317</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">144</style></volume><pages><style face="normal" font="default" size="100%">1898 - 1899</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Vocal communication plays an important role in the lives of both humans and many animal species. Ensuring accurate communication, however, requires auditory self-monitoring to control vocal production and rapidly compensate for errors in vocal output. Despite the importance of this process, the underlying neural mechanisms are relatively unknown. 
Previous work has demonstrated that neurons in the auditory cortex are suppressed during vocal production, while simultaneously maintaining their sensitivity to vocal feedback, suggesting a role in auditory self-monitoring. The behavioral role of auditory cortex in vocal control, however, remains unclear. We investigated the function of auditory cortical activity during vocal self-monitoring and feedback-dependent vocal control in marmoset monkeys. Using real-time frequency-shifted feedback during vocalization, we demonstrate that marmosets exhibit rapid compensatory changes in vocal production, a feedback-dependent behavior that is predicted by the activities of neurons in auditory cortex. We further establish the role of auditory cortex in vocal control using electrical microstimulation to evoke rapid changes in produced vocalizations. These findings suggest a causal role for the auditory cortex in vocal self-monitoring and feedback-dependent vocal control, linking mechanisms of production and perception, and have important implications for understanding human speech motor control.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Burton, Jane A.</style></author><author><style face="normal" font="default" size="100%">Ramachandran, Ramnarayan</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Neuronal frequency selectivity in the inferior colliculus and cochlear nucleus of the awake behaving macaque monkey</style></title><secondary-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></secondary-title><short-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-09-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://asa.scitation.org/doi/10.1121/1.5068466</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">144</style></volume><pages><style face="normal" font="default" size="100%">1935 - 1935</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Frequency selectivity relates to the ability to process complex signals and can be measured through auditory filters. Behavioral filters show broader tuning compared to cochlear and auditory nerve fiber tuning. 
To test whether filters evolve across the auditory pathway or if they are established in the periphery, we estimated neural filters in the cochlear nucleus (CN) and inferior colliculus (IC) and compared with simultaneously measured behavioral filters in macaques. Three macaques were trained to detect tones (signal = unit characteristic frequency (CF)) in spectrally notched maskers of varying width while single unit responses were recorded in the CN and IC. Filter shapes and bandwidths were estimated from the masked thresholds using the rounded exponential fit. Behavioral and neural filters increased in bandwidth with increasing CF. Behavioral and neural bandwidths were significantly correlated and not significantly different from each other for the CN and IC. Neural filter bandwidths were variable across units and structures, possibly reflecting heterogeneity of neuronal encoding strategies. These findings support a model in which behavioral frequency selectivity is established early in the auditory pathway. These data form the baseline for ongoing studies of macaques with noise-induced hearing loss and future studies of emerging hearing loss therapeutics.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pomberger, Thomas</style></author><author><style face="normal" font="default" size="100%">Hage, Steffen R.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Semi-chronic laminar recordings in the brainstem of behaving marmoset monkeys</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Neuroscience Methods</style></secondary-title><short-title><style face="normal" font="default" size="100%">Journal of Neuroscience Methods</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Callithrix jacchus</style></keyword><keyword><style  face="normal" font="default" size="100%">implanted electrodes</style></keyword><keyword><style  face="normal" font="default" size="100%">laminar multi-site microprobes</style></keyword><keyword><style  face="normal" font="default" size="100%">primates</style></keyword><keyword><style  face="normal" font="default" size="100%">semi-chronic recordings</style></keyword><keyword><style  face="normal" font="default" size="100%">single-unit recordings</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-10-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://linkinghub.elsevier.com/retrieve/pii/S0165027018303406https://api.elsevier.com/content/article/PII:S0165027018303406?httpAccept=text/xmlhttps://api.elsevier.com/content/article/PII:S0165027018303406?httpAccept=text/plain</style></url></web-urls></urls><language><style face="normal" font="default" 
size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Background&lt;/p&gt;
&lt;p&gt;Chronic recordings with multi-electrode arrays are widely used to study neural networks underlying complex primate behaviors. Most of these systems are designed for studying neural activity in the cortical hemispheres, resulting in a lack of devices capable of simultaneously recording from ensembles of neurons in deep brainstem structures. However, to fully understand complex behavior, it is fundamental to also decipher the intrinsic mechanisms of the underlying motor pattern generating circuits in the brainstem.&lt;/p&gt;
&lt;p&gt;New Method&lt;/p&gt;
&lt;p&gt;We report a light-weight system that simultaneously measures single-unit activity from a large number of recording sites in the brainstem of marmoset monkeys. It includes a base chamber fixed to the animal&amp;rsquo;s skull and a removable upper chamber that can be semi-chronically mounted to the base chamber to flexibly position an embedded micro-drive containing a 32-channel laminar probe to record from various positions within the brainstem for several weeks.&lt;/p&gt;
&lt;p&gt;Results&lt;/p&gt;
&lt;p&gt;The current system is capable of simultaneously recording stable single-unit activity from a large number of recording sites in the brainstem of vocalizing marmoset monkeys.&lt;/p&gt;
&lt;p&gt;Comparison with Existing Methods&lt;/p&gt;
&lt;p&gt;To the best of our knowledge, chronic systems to record from deep brainstem structures with multi-site laminar probes in awake, behaving monkeys do not yet exist.&lt;/p&gt;
&lt;p&gt;Conclusions&lt;/p&gt;
&lt;p&gt;The semi-chronic implantation of laminar electrodes into the brainstem of behaving marmoset monkeys opens new research possibilities in fully understanding the neural mechanisms underlying complex behaviors in marmoset monkeys.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Bensoussan, Sandy</style></author><author><style face="normal" font="default" size="100%">Tigeot, Raphaëlle</style></author><author><style face="normal" font="default" size="100%">Lemasson, Alban</style></author><author><style face="normal" font="default" size="100%">Meunier-Salaün, Marie-Christine</style></author><author><style face="normal" font="default" size="100%">Tallet, Céline</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Domestic piglets (Sus scrofa domestica) are attentive to human voice and able to discriminate some prosodic features</style></title><secondary-title><style face="normal" font="default" size="100%">Applied Animal Behaviour Science</style></secondary-title><short-title><style face="normal" font="default" size="100%">Applied Animal Behaviour Science</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Auditory preferences</style></keyword><keyword><style  face="normal" font="default" size="100%">Choice test</style></keyword><keyword><style  face="normal" font="default" size="100%">Human-animal relationship</style></keyword><keyword><style  face="normal" font="default" size="100%">Piglets</style></keyword><keyword><style  face="normal" font="default" size="100%">Voice prosody</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-10-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">https://linkinghub.elsevier.com/retrieve/pii/S0168159118301485https://api.elsevier.com/content/article/PII:S0168159118301485?httpAccept=text/xmlhttps://api.elsevier.com/content/article/PII:S0168159118301485?httpAccept=text/plain</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Vocal communication is of major social importance in pigs. Their auditory sensitivity goes beyond the intraspecific level; studies have shown that domestic pigs are sensitive to and can learn to recognise human voices. The question of which prosodic features (intonation, accentuation, rhythm) of human speech may matter to this recognition, however, remains open. A total of 42 piglets were allocated to three experimental groups. Each piglet was submitted to three choice tests, during which different pairs of sounds were broadcast. Each group was first offered a choice between an unmodified (neutral) human voice and a background noise, in order to verify the attractiveness of human voice. We found that piglets could distinguish human voice; they gazed more rapidly (P&amp;thinsp;&amp;lt;&amp;thinsp;0.05) and for longer (P&amp;thinsp;&amp;lt;&amp;thinsp;0.05) in the direction of the human voice than in the direction of the background noise. Group 1 was then submitted to artificially modified voices: low vs high-pitched, and then slow vs rapid rhythm. Group 2 was submitted to artificially modified voices with a combination of these features: rapid and high-pitched vs slow and low-pitched, and then slow and high-pitched vs rapid and low-pitched. Group 3 was submitted to naturally recorded voices coding for different emotions (happiness vs anger) and then different intonations (interrogation vs command). 
We found that piglets approached the loudspeaker broadcasting the rapid rhythm (6&amp;thinsp;s (2&amp;ndash;32)) more rapidly than the loudspeaker broadcasting the slow rhythm (33&amp;thinsp;s (15&amp;ndash;70); p&amp;thinsp;&amp;lt;&amp;thinsp;0.05). They also spent more time near the loudspeaker broadcasting the &amp;ldquo;high-pitched and slow&amp;rdquo; voice (86&amp;thinsp;s (52&amp;ndash;110)) than near the one broadcasting the &amp;ldquo;low-pitched and rapid&amp;rdquo; voice (29&amp;thinsp;s (9&amp;ndash;73); W&amp;thinsp;=&amp;thinsp;86, P&amp;thinsp;&amp;lt;&amp;thinsp;0.05). In sum, the sensitivity of piglets to human prosody was moderate but not absent. Our results suggest that piglets base their responses to human voice on a combination of prosodic features.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Katsu, Noriko</style></author><author><style face="normal" font="default" size="100%">Yamada, Kazunori</style></author><author><style face="normal" font="default" size="100%">Okanoya, Kazuo</style></author><author><style face="normal" font="default" size="100%">Nakamichi, Masayuki</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Temporal adjustment of short calls according to a partner during vocal turn-taking in Japanese macaques</style></title><secondary-title><style face="normal" font="default" size="100%">Current Zoology</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">primate</style></keyword><keyword><style  face="normal" font="default" size="100%">rhythm</style></keyword><keyword><style  face="normal" font="default" size="100%">turn-taking</style></keyword><keyword><style  face="normal" font="default" size="100%">vocalization</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Mar-10-2019</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://academic.oup.com/cz/advance-article/doi/10.1093/cz/zoy077/5132690</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Turn-taking is a common feature in human speech, and is also seen in the communication of other primate species. However, evidence of turn-taking in vocal exchanges within a short time frame is still scarce in nonhuman primates. 
This study investigated whether dynamic adjustment during turn-taking in short calls exists in Japanese macaques Macaca fuscata. We observed exchanges of short calls such as grunts, girneys, and short, low coos during social interactions in a free-ranging group of Japanese macaques. We found that the median gap between the turns of two callers was 250&amp;thinsp;ms. Call intervals varied among individuals rather than being fixed. Solo call intervals were shorter than call intervals interrupted by responses from partners (i.e., exchanges) and longer than those between the partner&amp;rsquo;s reply and the reply to that call, indicating that the monkeys did not just repeat calls at certain intervals irrespective of the social situation. The differences in call intervals during exchanged and solo call sequences were explained by the response interval of the partner, suggesting an adjustment of call timing according to the tempo of the partner&amp;rsquo;s call utterance. These findings suggest that monkeys display dynamic temporal adjustment in a short time window, which is comparable to turn-taking in human speech.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Adret, Patrice</style></author><author><style face="normal" font="default" size="100%">Dingess, Kimberly</style></author><author><style face="normal" font="default" size="100%">Caselli, Christini</style></author><author><style face="normal" font="default" size="100%">Vermeer, Jan</style></author><author><style face="normal" font="default" size="100%">Martínez, Jesus</style></author><author><style face="normal" font="default" size="100%">Luna Amancio, Jossy</style></author><author><style face="normal" font="default" size="100%">van Kuijk, Silvy</style></author><author><style face="normal" font="default" size="100%">Hernani Lineros, Lucero</style></author><author><style face="normal" font="default" size="100%">Wallace, Robert</style></author><author><style face="normal" font="default" size="100%">Fernandez-Duque, Eduardo</style></author><author><style face="normal" font="default" size="100%">Di Fiore, Anthony</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Duetting Patterns of Titi Monkeys (Primates, Pitheciidae: Callicebinae) and Relationships with Phylogeny</style></title><secondary-title><style face="normal" font="default" size="100%">Animals</style></secondary-title><short-title><style face="normal" font="default" size="100%">Animals</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Callicebus</style></keyword><keyword><style  face="normal" font="default" size="100%">Cheracebus</style></keyword><keyword><style  face="normal" font="default" size="100%">conservation</style></keyword><keyword><style  face="normal" font="default" size="100%">Plecturocebus</style></keyword><keyword><style  face="normal" font="default" 
size="100%">taxonomy</style></keyword><keyword><style  face="normal" font="default" size="100%">vocal communication</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-10-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.mdpi.com/2076-2615/8/10/178</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">8</style></volume><pages><style face="normal" font="default" size="100%">178</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Long-range vocal communication in socially monogamous titi monkeys is mediated by the production of loud, advertising calls in the form of solos, duets, and choruses. We conducted a power spectral analysis of duets and choruses (simply &amp;ldquo;duets&amp;rdquo; hereafter) followed by linear discriminant analysis using three acoustic parameters&amp;mdash;dominant frequency of the combined signal, duet sequence duration, and pant call rate&amp;mdash;comparing the coordinated vocalizations recorded from 36 family groups at 18 sites in Bolivia, Peru and Ecuador. Our analysis identified four distinct duetting patterns: (1) a donacophilus pattern, sensu stricto, characteristic of P. donacophilus, P. pallescens, P. olallae, and P. modestus; (2) a moloch pattern comprising P. discolor, P. toppini, P. aureipalatii, and P. urubambensis; (3) a torquatus pattern exemplified by the duet of Cheracebus lucifer; and (4) the distinctive duet of&amp;nbsp;P. oenanthe, a putative member of the donacophilus group, which is characterized by a mix of broadband and narrowband syllables, many of which are unique to this species. 
We also document a sex-related difference in the bellow-pant phrase combination among the three taxa sampled from the moloch lineage. Our data reveal a presumptive taxonomic incoherence illustrated by the distinctive loud calls of both P. urubambensis and P. oenanthe within the donacophilus lineage, sensu largo. The results are discussed in light of recent reassessments of the callicebine phylogeny, based on a suite of genetic studies, and the potential contribution of environmental influences, including habitat acoustics and social learning. A better knowledge of callicebine loud calls may also impact the conservation of critically endangered populations, such as the vocally distinctive Peruvian endemic, the San Martin titi, P. oenanthe.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">10</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kessler, Sharon E</style></author><author><style face="normal" font="default" size="100%">Radespiel, Ute</style></author><author><style face="normal" font="default" size="100%">Hasiniaina, Alida I F</style></author><author><style face="normal" font="default" size="100%">Leliveld, Lisette M C</style></author><author><style face="normal" font="default" size="100%">Nash, Leanne T</style></author><author><style face="normal" font="default" size="100%">Zimmermann, Elke</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Modeling the origins of mammalian sociality: moderate evidence for matrilineal signatures in mouse lemur vocalizations.</style></title><secondary-title><style face="normal" font="default" size="100%">Front Zool</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Front. Zool.</style></alt-title></titles><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">2014 Feb 20</style></date></pub-dates></dates><volume><style face="normal" font="default" size="100%">11</style></volume><pages><style face="normal" font="default" size="100%">14</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;b&gt;INTRODUCTION: &lt;/b&gt;Maternal kin selection is a driving force in the evolution of mammalian social complexity and it requires that kin are distinctive from nonkin. 
The transition from the ancestral state of asociality to the derived state of complex social groups is thought to have occurred via solitary foraging, in which individuals forage alone, but, unlike the asocial ancestors, maintain dispersed social networks via scent-marks and vocalizations. We hypothesize that matrilineal signatures in vocalizations were an important part of these networks. We used the solitary foraging gray mouse lemur (Microcebus murinus) as a model for ancestral solitary foragers and tested for matrilineal signatures in their calls, thus investigating whether such signatures are already present in solitary foragers and could have facilitated the kin selection thought to have driven the evolution of increased social complexity in mammals. Because agonism can be very costly, selection for matrilineal signatures in agonistic calls should help reduce agonism between unfamiliar matrilineal kin. We conducted this study on a well-studied population of wild mouse lemurs at Ankarafantsika National Park, Madagascar. We determined pairwise relatedness using seven microsatellite loci, matrilineal relatedness by sequencing the mitochondrial D-loop, and sleeping group associations using radio-telemetry. We recorded agonistic calls during controlled social encounters and conducted a multi-parametric acoustic analysis to determine the spectral and temporal structure of the agonistic calls. We measured 10 calls for each of 16 females from six different matrilineal kin groups.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;RESULTS: &lt;/b&gt;Calls were assigned to their matriline at a rate significantly higher than chance (pDFA: correct&amp;thinsp;=&amp;thinsp;47.1%, chance&amp;thinsp;=&amp;thinsp;26.7%, p&amp;thinsp;=&amp;thinsp;0.03). There was a statistical trend for a negative correlation between acoustic distance and relatedness (Mantel Test: g&amp;thinsp;=&amp;thinsp;-1.61, Z&amp;thinsp;=&amp;thinsp;4.61, r&amp;thinsp;=&amp;thinsp;-0.13, p&amp;thinsp;=&amp;thinsp;0.058).&lt;/p&gt;
&lt;p&gt;&lt;b&gt;CONCLUSIONS: &lt;/b&gt;Mouse lemur agonistic calls are moderately distinctive by matriline. Because sleeping groups consisted of close maternal kin, both genetics and social learning may have generated these acoustic signatures. As mouse lemurs are models for solitary foragers, we recommend further studies testing whether the lemurs use these calls to recognize kin. This would enable further modeling of how kin recognition in ancestral species could have shaped the evolution of complex sociality.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Torti, Valeria</style></author><author><style face="normal" font="default" size="100%">Valente, Daria</style></author><author><style face="normal" font="default" size="100%">De Gregorio, Chiara</style></author><author><style face="normal" font="default" size="100%">Comazzi, Carlo</style></author><author><style face="normal" font="default" size="100%">Miaretsoa, Longondraza</style></author><author><style face="normal" font="default" size="100%">Ratsimbazafy, Jonah</style></author><author><style face="normal" font="default" size="100%">Giacoma, Cristina</style></author><author><style face="normal" font="default" size="100%">Gamba, Marco</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Reby, David</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Call and be counted! 
Can we reliably estimate the number of callers in the indri's (Indri indri) song?</style></title><secondary-title><style face="normal" font="default" size="100%">PLOS ONE</style></secondary-title><short-title><style face="normal" font="default" size="100%">PLoS ONE</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Mar-08-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://dx.plos.org/10.1371/journal.pone.0201664</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">13</style></volume><pages><style face="normal" font="default" size="100%">e0201664</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Estimating the number of animals participating in a choral display may contribute reliable information on animal population estimates, particularly when environmental or behavioral factors restrict the possibility of visual surveys. Difficulties in providing a reliable estimate of the number of singers in a chorus are many (e.g., background noise masking, overlap). In this work, we contributed data on the vocal chorusing of the indri lemurs (Indri indri), which emit howling cries, known as songs, uttered by two to five individuals. We examined whether we could estimate the number of emitters in a chorus by screening the fundamental frequency in the spectrograms and the total duration of the songs, and the reliability of those methods when compared to the real chorus size. The spectrographic investigation appears to provide reliable information on the number of animals participating in the chorusing only when this number is limited to two or three singers. 
We also found that the Acoustic Complexity Index positively correlated with the real chorus size, showing that an automated analysis of the chorus may provide information about the number of singers. We can state that song duration shows a correlation with the number of emitters but also shows a remarkable variation that remains unexplained. The accuracy of the estimates can reflect the high variability in chorus size, which could be affected by group composition, season and context. In future research, a greater focus on analyzing frequency change occurring during these collective vocal displays should improve our ability to detect individuals and allow a finer tuning of the acoustic methods that may serve for monitoring chorusing mammals.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">8</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Robakis, Efstathia</style></author><author><style face="normal" font="default" size="100%">Watsa, Mrinalini</style></author><author><style face="normal" font="default" size="100%">Erkenswick, Gideon</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Classification of producer characteristics in primate long calls using neural networks</style></title><secondary-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></secondary-title><short-title><style face="normal" font="default" size="100%">The Journal of the Acoustical Society of America</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-07-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://asa.scitation.org/doi/10.1121/1.5046526</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">144</style></volume><pages><style face="normal" font="default" size="100%">344 - 353</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Primate long calls are high-amplitude vocalizations that can be critical in maintaining intragroup contact and intergroup spacing, and can encode abundant information about a call&amp;#39;s producer, such as age, sex, and individual identity. 
Long calls of the wild emperor (Saguinus imperator) and saddleback (Leontocebus weddelli) tamarins were tested for these identity signals using artificial neural networks, machine-learning models that reduce subjectivity in vocalization classification. To assess whether modelling could be streamlined by using only factors which were responsible for the majority of variation within networks, each series of networks was re-trained after implementing two methods of feature selection. First, networks were trained and run using only the subset of variables whose weights accounted for &amp;ge;50% of each original network&amp;#39;s variation, as identified by the networks themselves. In the second, only variables implemented by decision trees in predicting outcomes were used. Networks predicted dependent variables above chance (&amp;ge;58.7% for sex, &amp;ge;69.2% for age class, and &amp;ge;38.8% for seven to eight individuals), but classification accuracy was not markedly improved by feature selection. Findings are discussed with regard to implications for future studies on identity signaling in vocalizations and streamlining of data analysis.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Piel, Alex K.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Temporal patterns of chimpanzee loud calls in the Issa Valley, Tanzania: Evidence of nocturnal acoustic behavior in wild chimpanzees</style></title><secondary-title><style face="normal" font="default" size="100%">American Journal of Physical Anthropology</style></secondary-title><short-title><style face="normal" font="default" size="100%">Am J Phys Anthropol</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">great ape</style></keyword><keyword><style  face="normal" font="default" size="100%">pant hoot</style></keyword><keyword><style  face="normal" font="default" size="100%">passive acoustic monitoring</style></keyword><keyword><style  face="normal" font="default" size="100%">vocalization</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-07-2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://doi.wiley.com/10.1002/ajpa.v166.3</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">166</style></volume><pages><style face="normal" font="default" size="100%">530 - 540</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Objectives: Much is known about chimpanzee diurnal call patterns, but far less about night-time vocal behavior. 
I deployed a passive acoustic monitoring (PAM) system to assess 24-hr temporal acoustic activity of wild, unhabituated chimpanzees that live in a woodland mosaic habitat similar to hominin landscapes from the Plio-Pleistocene. A primary aim was to apply findings to our broader understanding of chimpanzee 24-hr activity patterns, and what implications this may have for reconstructing hominin adaptations to similarly hot, dry, and open landscapes. I also tested whether chimpanzees conform to the acoustic adaptation hypothesis, and produce loud calls during periods of optimal sound transmission.&lt;/p&gt;
&lt;p&gt;&lt;br /&gt;
	Methods: Nine custom-made solar-powered acoustic transmission units (SPATUs) recorded continuously for 250 days over 11 months in the Issa Valley, western Tanzania. I complemented acoustic data with environmental data from weather stations as well as behavioral data collected on chimpanzee nest group sizes to assess the relationship between party size and calling.&lt;/p&gt;
&lt;p&gt;&lt;br /&gt;
	Results: Chimpanzees called at all hours of the day and night in both wet and dry seasons, and night and day calls exhibited parallel rates/month, although twilight calls were produced significantly more in the dry, compared to the wet season. Calls were more likely during warmer temperatures and lower humidity. Call rate was positively associated with (nest) party size and counter-calls exhibited no temporal variation in their origins (similar vs. adjacent valleys).&lt;/p&gt;
&lt;p&gt;&lt;br /&gt;
	Conclusions: Chimpanzees were acoustically active throughout the 24-hr cycle, although at low rates compared to diurnal activity, revealing night-time activity in an ape otherwise described as diurnal. Chimpanzee loud calls partially, and weakly, conformed to the acoustic adaptation hypothesis and likely responded to social, rather than environmental factors. Call rates accurately reflect grouping patterns and PAM is demonstrated to be an effective means of remotely assessing activity, especially at times and from places that are difficult to access for researchers.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Fröhlich, Marlen</style></author><author><style face="normal" font="default" size="100%">Müller, Gudrun</style></author><author><style face="normal" font="default" size="100%">Zeiträg, Claudia</style></author><author><style face="normal" font="default" size="100%">Wittig, Roman M.</style></author><author><style face="normal" font="default" size="100%">Pika, Simone</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Gestural development of chimpanzees in the wild: the impact of interactional experience</style></title><secondary-title><style face="normal" font="default" size="100%">Animal Behaviour</style></secondary-title><short-title><style face="normal" font="default" size="100%">Animal Behaviour</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">chimpanzee</style></keyword><keyword><style  face="normal" font="default" size="100%">communication</style></keyword><keyword><style  face="normal" font="default" size="100%">development</style></keyword><keyword><style  face="normal" font="default" size="100%">gestures</style></keyword><keyword><style  face="normal" font="default" size="100%">interactional experience</style></keyword><keyword><style  face="normal" font="default" size="100%">ontogeny</style></keyword><keyword><style  face="normal" font="default" size="100%">Pan troglodytes</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-12-2017</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">http://linkinghub.elsevier.com/retrieve/pii/S000334721630361X</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">134</style></volume><pages><style face="normal" font="default" size="100%">271 - 282</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;To understand the complexity involved in animal signalling, studies have mainly focused on repertoire size and information conveyed in vocalizations of birds and nonhuman primates. However, recent studies on gestural abilities of nonhuman primates have shown that we also need a detailed understanding of other communicative modalities and underlying cognitive skills to grasp this phenomenon in detail. Here, we thus examined gestural signalling of chimpanzees, Pan troglodytes, living in two communities in the wild (Kanyawara, Uganda; Ta&amp;iuml; South, C&amp;ocirc;te d&amp;#39;Ivoire) with a special focus on the influence of the social environment on signal development. Specifically, we investigated to what extent specific social factors, namely behavioural context, interaction rates and maternal proximity, affect gestural production (i.e. gesture frequency, sequences and repertoire size). We used a combination of video recordings and focal scans obtained from 11 infants aged between 9 and 69 months during 1145 h of observation throughout two consecutive field periods. Overall, we found that social play was the context in which the highest number of gestures occurred. While gesture frequency and repertoire size increased with higher interaction rates with nonmaternal conspecifics and the number of previous interaction partners, no effect was found for interaction rates with mothers. Our results thus imply that infants of social mothers may have a head start in life.
Moreover, we provide hitherto undocumented evidence for sex differences in gestural signalling, which may reflect the differential importance of early socialization for chimpanzee males and females. Gestural development thus relies heavily on interactional experiences with conspecifics, which adds support for gestural acquisition via the learning mechanism of &amp;lsquo;social negotiation&amp;rsquo; in great apes.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Sperber, Anna Lucia</style></author><author><style face="normal" font="default" size="100%">Werner, Lynne M.</style></author><author><style face="normal" font="default" size="100%">Kappeler, Peter M.</style></author><author><style face="normal" font="default" size="100%">Fichtel, Claudia</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Wright, J.</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Grunt to go-Vocal coordination of group movements in redfronted lemurs</style></title><secondary-title><style face="normal" font="default" size="100%">Ethology</style></secondary-title><short-title><style face="normal" font="default" size="100%">Ethology</style></short-title></titles><dates><year><style  face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-12-2017</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://doi.wiley.com/10.1111/eth.2017.123.issue-12</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">123</style></volume><pages><style face="normal" font="default" size="100%">894 - 905</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;To remain cohesive as a group, individuals must coordinate their movements between resources. In many species, vocalisations are used in this context. While some species have specific movement calls, others use calls which are also employed in different contexts. 
The use of such multicontextual calls has rarely been studied quantitatively, especially during both the pre-departure and departure period associated with collective decisions. We thus investigated the use of close calls (&amp;ldquo;grunts&amp;rdquo;) for the coordination of collective movements in four groups of wild redfronted lemurs (Eulemur rufifrons) in Kirindy Forest, Western Madagascar. Group movements are started by an initiator, who moves away from the group and is joined by followers setting out in the same direction. We observed collective movements and recorded vocalisations from 18 focal individuals (54 movements recorded for followers, 21 for initiators). The grunt rate of both initiators and followers was higher in the pre-departure period than in a control context (i.e., during foraging). Initiators of collective movements grunted more often than followers in the pre-departure period as well as at individual departure. The latter difference was due to the initiators&amp;rsquo; grunt rates increasing earlier than the followers&amp;rsquo; and remaining at an elevated level for longer. These observations suggest that grunts serve to coordinate the departure by indicating the individual&amp;#39;s readiness to move. The pre-departure period, in which both initiators and followers showed an elevated grunt rate, may provide the basis for a shared decision on departure time. The difference in initiator and follower call rates suggests that grunts may have a recruitment function, but playback experiments are required to test this potential function. Overall, our study describes how multicontextual close calls can function as movement calls, with changes in call rate providing a potential feedback mechanism for the timing of group movements. This study thus contributes to a more detailed understanding of the mechanisms of group coordination and collective decision-making.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">12</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Brown, Charles H.</style></author><author><style face="normal" font="default" size="100%">Waser, Peter M.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Primate Habitat Acoustics</style></title></titles><keywords><keyword><style  face="normal" font="default" size="100%">acoustic adaptation hypothesis</style></keyword><keyword><style  face="normal" font="default" size="100%">ambient noise</style></keyword><keyword><style  face="normal" font="default" size="100%">amplitude fluctuation</style></keyword><keyword><style  face="normal" font="default" size="100%">animal vocalization</style></keyword><keyword><style  face="normal" font="default" size="100%">comparative bioacoustics</style></keyword><keyword><style  face="normal" font="default" size="100%">Distortion</style></keyword><keyword><style  face="normal" font="default" size="100%">excess attenuation</style></keyword><keyword><style  face="normal" font="default" size="100%">reverberation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2017</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/978-3-319-59478-1</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer International Publishing</style></publisher><pub-location><style face="normal" font="default" size="100%">Cham</style></pub-location><volume><style face="normal" font="default" size="100%">63</style></volume><pages><style face="normal" font="default" size="100%">79 - 107</style></pages><isbn><style face="normal" font="default" 
size="100%">978-3-319-59476-7</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Natural habitats are not recording studios. Calls emitted in nature encounter an irregular assortment of hard surfaces that reflect and scatter the wave front, producing complicated patterns of constructive and destructive interference. The propagated wave front is subsequently disturbed by wind, thermal gradients, and atmospheric absorption. Collectively, these phenomena result in an unpredictable and untidy acoustic environment. Furthermore, thunder, rain, crashing waves, or the relentless chatter of biotic sources can result in high ambient-noise levels that may mask the signal, overwhelm the recipient, and obliterate significant nuances and embellishments. Thus, vocal communication is hampered by attenuation, reverberation, distortion, and acoustic disturbances. Accordingly, the twin components of vocal communication, sound production and acoustic perception, may have undergone persistent selection to counter the most prominent impediments to both hearing and being heard. Primates have radiated from rain forest to grassland and other habitats, and each habitat differs acoustically. Hence, there is reason to believe that the duration, amplitude, pitch, and composition of primate vocal repertoires, the timing of emissions, and the placement and orientation of vocalizers is not haphazard, but each has become tuned to the acoustic parameters of the natal habitat to heighten the clarity of vocal exchanges. This chapter begins with an overview of the acoustic properties of rain forest, riverine forest, and savanna habitats occupied by East African primates, which is followed by reviews of how primate calls become distorted when propagated in natural habitats and how distortion scores have been used to explore the acoustic adaptation hypothesis. 
Finally, significant opportunities for additional research are highlighted.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Bouchet, Hélène</style></author><author><style face="normal" font="default" size="100%">Koda, Hiroki</style></author><author><style face="normal" font="default" size="100%">Lemasson, Alban</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Age-dependent change in attention paid to vocal exchange rules in Japanese macaques</style></title><secondary-title><style face="normal" font="default" size="100%">Animal Behaviour</style></secondary-title><short-title><style face="normal" font="default" size="100%">Animal Behaviour</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">call matching</style></keyword><keyword><style  face="normal" font="default" size="100%">Japanese Macaque</style></keyword><keyword><style  face="normal" font="default" size="100%">nonhuman primate</style></keyword><keyword><style  face="normal" font="default" size="100%">playback experiment</style></keyword><keyword><style  face="normal" font="default" size="100%">vocal development</style></keyword><keyword><style  face="normal" font="default" size="100%">vocal exchange</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-07-2017</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://linkinghub.elsevier.com/retrieve/pii/S0003347217301495</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">129</style></volume><pages><style face="normal" font="default" size="100%">81 - 92</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style 
face="normal" font="default" size="100%">&lt;p&gt;The mechanisms underlying vocal development in nonhuman primates, so-called &amp;lsquo;nonlearners&amp;rsquo;, are of special interest because they give an insight in how social factors can shape the expression of an already genetically determined vocal repertoire. Interestingly, recent studies suggest that the acquisition of the complex rules governing vocal exchanges (i.e. context-specific temporal and structural acoustic adjustments) may result from a socially guided development process, with social experience and parental selective feedback playing a key role. Among those conversational rules, call matching is a particularly remarkable phenomenon in Japanese macaques, Macaca fuscata, with interacting adult females matching the frequency range pattern of their own call with another female&amp;#39;s preceding call. Here, we investigated whether fine-tuned acoustic adjustments during vocal exchanges in Japanese macaques are subject to developmental processes, specifically testing for the ability of individuals of different age classes to discriminate between vocal exchanges respecting, or not, the matching rule. We performed playback experiments in 10 adult and 10 1-year-old captive Japanese macaque females. Each subject was successively exposed to two stimuli: a pair of calls respecting call matching (i.e. two calls from two individuals with matched frequency ranges) and another pair of calls that did not. Adults discriminated better than juveniles whether stimuli respected the call-matching rule or not, and displayed significantly different levels of interest towards each stimulus type. The latency to look towards the loudspeaker was shorter, and the duration of the directed gaze was longer, after the playback that violated the matching expectation in every adult, but not in juveniles which seemingly displayed a random gaze response. 
Our findings support the conclusion that the matching rule is relevant for adults, but not for socially inexperienced young monkeys which may not have had enough experience of the conversational rules governing vocal exchanges.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Šebesta, Pavel</style></author><author><style face="normal" font="default" size="100%">Kleisner, Karel</style></author><author><style face="normal" font="default" size="100%">Tureček, Petr</style></author><author><style face="normal" font="default" size="100%">Kočnar, Tomáš</style></author><author><style face="normal" font="default" size="100%">Akoko, Robert Mbe</style></author><author><style face="normal" font="default" size="100%">Třebický, Vít</style></author><author><style face="normal" font="default" size="100%">Havlíček, Jan</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Voices of Africa: acoustic predictors of human male vocal attractiveness</style></title><secondary-title><style face="normal" font="default" size="100%">Animal Behaviour</style></secondary-title><short-title><style face="normal" font="default" size="100%">Animal Behaviour</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">attractiveness</style></keyword><keyword><style  face="normal" font="default" size="100%">body mass index</style></keyword><keyword><style  face="normal" font="default" size="100%">formant</style></keyword><keyword><style  face="normal" font="default" size="100%">fundamental frequency</style></keyword><keyword><style  face="normal" font="default" size="100%">human</style></keyword><keyword><style  face="normal" font="default" size="100%">male</style></keyword><keyword><style  face="normal" font="default" size="100%">sexual selection</style></keyword><keyword><style  face="normal" font="default" size="100%">undernutrition</style></keyword><keyword><style  face="normal" font="default" size="100%">voice</style></keyword><keyword><style  face="normal" font="default" 
size="100%">harmonics-to-noise ratio</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-05-2017</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://linkinghub.elsevier.com/retrieve/pii/S0003347217300866</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">127</style></volume><pages><style face="normal" font="default" size="100%">205 - 211</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;Robust evidence shows that voice quality affects various social interactions, including mate preferences. Previous research found that male voices perceived as attractive are characterized by low voice pitch, lower or sexually typical formants and relatively high breathiness. These features tend to be seen as markers of an individual&amp;#39;s quality as a potential mate. Although there are considerable differences between languages in vocal parameters that could influence the perceived attractiveness, the above-mentioned findings rely on research based mainly on participants from European or North American countries. In our study, we therefore tested the main acoustic predictors of vocal attractiveness using two male samples from Cameroon and Namibia. Standardized vocal recordings were then assessed for vocal attractiveness by a panel of female raters from the Czech Republic. Our results show that in the Cameroonian voices, fundamental frequency was strongly negatively associated with perceived vocal attractiveness. In the Namibian sample, however, it was not the fundamental frequency but lower mean formants and harmonics-to-noise ratio that were negatively associated with vocal attractiveness.
This pattern may be partly attributed to differences in morphological characteristics such as the body mass index, indicating variation across individual populations.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Raine, Jordan</style></author><author><style face="normal" font="default" size="100%">Pisanski, Katarzyna</style></author><author><style face="normal" font="default" size="100%">Reby, David</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Tennis grunts communicate acoustic cues to sex and contest outcome</style></title><secondary-title><style face="normal" font="default" size="100%">Animal Behaviour</style></secondary-title><short-title><style face="normal" font="default" size="100%">Animal Behaviour</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">dominance</style></keyword><keyword><style  face="normal" font="default" size="100%">fundamental frequency</style></keyword><keyword><style  face="normal" font="default" size="100%">nonverbal vocalizations</style></keyword><keyword><style  face="normal" font="default" size="100%">pitch</style></keyword><keyword><style  face="normal" font="default" size="100%">tennis grunts</style></keyword><keyword><style  face="normal" font="default" size="100%">vocal communication</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-08-2017</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://linkinghub.elsevier.com/retrieve/pii/S0003347217301975</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">130</style></volume><pages><style face="normal" font="default" size="100%">47 - 55</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" 
size="100%">&lt;p&gt;Despite their ubiquity in human behaviour, the communicative functions of nonverbal vocalizations remain poorly understood. Here, we analysed the acoustic structure of tennis grunts, nonverbal vocal- izations produced in a competitive context. We predicted that tennis grunts convey information about the vocalizer and context, similar to nonhuman vocal displays. Speci fi cally, we tested whether the fundamental frequency (F0) of tennis grunts conveys static cues to a player&amp;#39;s sex, height, weight, and age, and covaries dynamically with tennis shot type (a proxy of body posture) and the progress and outcome of male and female professional tennis contests. We also performed playback experiments (using natural and resynthesized stimuli) to assess the perceptual relevance of tennis grunts. The F0 of tennis grunts predicted player sex, but not age or body size. Serve grunts had higher F0 than forehand and backhand grunts, grunts produced later in contests had higher F0 than those produced earlier, and grunts produced during contests that players won had a lower F0 than those produced during lost contests. This difference in F0 between losses and wins emerged early in matches, and did not change in magnitude as the match progressed, suggesting a possible role of physiological and/or psychological factors manifesting early or even before matches. Playbacks revealed that listeners use grunt F0 to infer sex and contest outcome. These fi ndings indicate that tennis grunts communicate information about both the vocalizer and contest, consistent with nonhuman mammal vocalizations.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kean, Donna</style></author><author><style face="normal" font="default" size="100%">Tiddi, Barbara</style></author><author><style face="normal" font="default" size="100%">Fahy, Martin</style></author><author><style face="normal" font="default" size="100%">Heistermann, Michael</style></author><author><style face="normal" font="default" size="100%">Schino, Gabriele</style></author><author><style face="normal" font="default" size="100%">Wheeler, Brandon C.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Feeling anxious? The mechanisms of vocal deception in tufted capuchin monkeys</style></title><secondary-title><style face="normal" font="default" size="100%">Animal Behaviour</style></secondary-title><short-title><style face="normal" font="default" size="100%">Animal Behaviour</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">affect</style></keyword><keyword><style  face="normal" font="default" size="100%">alarm calls</style></keyword><keyword><style  face="normal" font="default" size="100%">anxiety</style></keyword><keyword><style  face="normal" font="default" size="100%">deceptive behaviour</style></keyword><keyword><style  face="normal" font="default" size="100%">emotions</style></keyword><keyword><style  face="normal" font="default" size="100%">primates</style></keyword><keyword><style  face="normal" font="default" size="100%">scratching</style></keyword><keyword><style  face="normal" font="default" size="100%">self-directed behaviours</style></keyword><keyword><style  face="normal" font="default" size="100%">vocalizations</style></keyword><keyword><style  face="normal" font="default" size="100%">within-group contest 
competition</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2017</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-08-2017</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://linkinghub.elsevier.com/retrieve/pii/S0003347217301835</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">130</style></volume><pages><style face="normal" font="default" size="100%">37 - 46</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;An ability to deceive conspecifics is thought to have favoured the evolution of large brains in social animals, but evidence that such behaviours require cognitive complexity is lacking. Tufted capuchin monkeys (Sapajus spp.) have been documented to use false alarm calls during feeding in a manner that functions to deceive competitors. However, comparative evidence suggests that the production of vocalizations by nonhuman primates is largely underpinned by emotional mechanisms, calling into question more cognitive interpretations of this behaviour. To determine whether emotional states are plausibly necessary and sufficient to proximately explain deceptive alarm call production, we examined the association between self-directed behaviours (SDBs), as a proxy for anxiety, and the production of spontaneous false alarm calls among tufted capuchins. Specifically, we predicted that if anxiety is necessary for the production of false alarms, then individuals that produce spontaneous false alarms should exhibit more SDBs in those contexts in which they call.
If anxiety is also sufficient to explain the false alarm call production, then we predicted that individuals that call more in a given context would show higher rates of SDBs in that context, and that high rates of calling would be temporally associated with high rates of SDBs. Our results support the contention that states of anxiety are necessary for an individual to spontaneously produce false alarms, but that such states are not sufficient to explain patterns of calling. The link between anxiety and deceptive calling thus appears complex, and cognitively based decision-making processes may play some role in call production.&lt;/p&gt;
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Goerlitz, H. R.</style></author><author><style face="normal" font="default" size="100%">Siemers, B. M.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Sensory ecology of prey rustling sounds: acoustical features and their classification by wild Grey Mouse Lemurs</style></title><secondary-title><style face="normal" font="default" size="100%">Functional Ecology</style></secondary-title><short-title><style face="normal" font="default" size="100%">Funct Ecology</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">foraging</style></keyword><keyword><style  face="normal" font="default" size="100%">prey choice</style></keyword><keyword><style  face="normal" font="default" size="100%">prey selection</style></keyword><keyword><style  face="normal" font="default" size="100%">primates</style></keyword><keyword><style  face="normal" font="default" size="100%">size selection</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2007</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-02-2007</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.blackwell-synergy.com/toc/fec/21/1</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">21</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;1. Predatory mammals and birds from several phylogenetic lineages use prey rustling sounds to detect and locate prey. 
However, it is not known whether these rustling sounds convey information about the prey, such as its size or profitability, and whether predators use them to classify prey accordingly.&lt;/p&gt;
&lt;p&gt;2. We recorded rustling sounds of insects in Madagascar walking on natural substrate and show a clear correlation between insect mass and several acoustic parameters.&lt;/p&gt;
&lt;p&gt;3. In subsequent behavioural experiments in the field, we determined whether nocturnal animals, when foraging for insects, evaluate these parameters to classify their prey. We used field-experienced Grey Mouse Lemurs Microcebus murinus in short-term captivity. Mouse Lemurs are generally regarded as a good model for the most ancestral primate condition. They use multimodal sensorial information to find food (mainly fruit, gum, insect secretions and arthropods) in nightly forest. Acoustic cues play a role in detection of insect prey.&lt;/p&gt;
&lt;p&gt;4. When presented with two simultaneous playbacks of rustling sounds, lemurs spontaneously chose the one higher above their hearing threshold, i.e. they used the rustling sound&amp;#39;s amplitude for classification. We were not able, despite attempts in a reinforced paradigm, to persuade lemurs to use cues other than amplitude, e.g. frequency cues, for prey discrimination.&lt;/p&gt;
&lt;p&gt;5. Our data suggests that Mouse Lemurs, when foraging for insects, use the mass&amp;ndash;amplitude correlation of prey-generated rustling sounds to evaluate the average mass of insects and to guide their foraging decisions.&lt;/p&gt;
</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Seiler, Melanie</style></author><author><style face="normal" font="default" size="100%">Schwitzer, Christoph</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Call Repertoire of the Sahamalaza Sportive Lemur, Lepilemur sahamalazensis</style></title><secondary-title><style face="normal" font="default" size="100%">International Journal of Primatology</style></secondary-title><short-title><style face="normal" font="default" size="100%">Int J Primatol</style></short-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">bioacoustics</style></keyword><keyword><style  face="normal" font="default" size="100%">Call function</style></keyword><keyword><style  face="normal" font="default" size="100%">call type</style></keyword><keyword><style  face="normal" font="default" size="100%">communication</style></keyword><keyword><style  face="normal" font="default" size="100%">Lepilemur</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Jan-06-2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/10.1007/s10764-015-9846-0</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">36</style></volume><pages><style face="normal" font="default" size="100%">647 - 665</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The acoustic structure of primate loud calls can be used as a powerful, inexpensive, and 
noninvasive tool for intra- and interspecific comparative analyses, reconstruction of phylogeny, and primate surveys. Despite the range of possibilities offered by acoustic analysis, only a few studies so far have focused on quantitative descriptions of the acoustic structure of primate loud call repertoires. Here we aimed to assess the vocal repertoire of the solitary Sahamalaza sportive lemur, Lepilemur sahamalazensis, and to investigate potential communication functions. We recorded every sportive lemur vocalization we heard during 1000 h of nocturnal observations of eight collared individuals, as well as during opportunistic searches in the Ankarafa Forest, Sahamalaza Peninsula in northwest Madagascar. In addition, we used playback experiments with four call types to clarify call function. We measured both temporal and spectral properties to describe calls quantitatively and used cross-validated discriminant function analysis to validate call types that we identified from a preliminary qualitative inspection of the spectrograms of 107 calls. We identified six distinct loud call types with the possibility of a seventh call type, with six loud call types similar to those of Lepilemur edwardsi and two loud call types similar to those of four other sportive lemur species. The described call types most likely function in mate advertisement, offspring care, and territorial defense. Future studies of loud calling of the Sahamalaza sportive lemur are needed to clarify if certain call types are sex specific and if loud calls could be used for recognition of individuals to enable noninvasive density measurements and species monitoring.</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record></records></xml>