Squawks, grunts, and speech: vocal communication across the animal kingdom


A recap of the Birdsong 7 Satellite Meeting @ SfN17

Before SfN officially kicked off on Saturday, a group of researchers flocked to the U Maryland College Park campus to talk about…well, talking. Birdsong 7 brought together a group of (mainly) songbird researchers interested in how animals learn, process, and generate complex communication signals.

As I noted in my first post, vocal learning is extremely rare in vertebrates. Aside from humans, only a handful of other species must learn their species-specific vocalizations, including three orders of birds: songbirds, parrots, and hummingbirds (the only other mammals besides humans are pinnipeds, cetaceans :dolphin:, and bats :bat:).

While vocal learning is rare, vocal communication is pervasive in the animal kingdom, best exemplified by my favorite childhood toy, the humble See 'n Say.

Which brings us to Birdsong 7. Organized and attended primarily by fellow bird-nerds, this pre-SfN kick-off meeting brings in a wide variety of researchers across “model organisms,” all generally interested in how animals communicate.

Below is a quick recap of some of the (many!) great talks:

:one: Robert Seyfarth - “Flexible usage and the social function of primate vocalizations”

Seyfarth kicked off the event talking about non-human primate vocal communication. While our closest primate relatives aren't considered vocal learners (although some hints of it have emerged recently), infant vervet monkeys do learn their predator alarm calls.

As primate calls are rather simple and mostly innate, Seyfarth focused on the function of unlearned grunts in baboons, which sound exactly like this :point_down:

[Audio: baboon grunt, from the Cheney & Seyfarth lab website]

Grunts, it turns out, convey individual baboon identity and keep the peace within the troop. When individuals approach one another, at least one of them grunts, which tends to lead to more “friendly” interactions and defuses a tense, possibly aggressive situation.

“Grunts grease the social wheels” -Robert Seyfarth

Seyfarth found that if he played back an aggressor's grunt during a tense social interaction, it mollified the recipient and prevented altercations. Interestingly, grunts from the aggressor's family members had the same effect, while grunts from unrelated baboons did not, so there's something special and 'dialect'-like about family grunt identity.

He went on to describe similar ideas in bonobo vocalizations (high 'n low “hoots”), ultimately converging on the idea that while non-human primates will never ascend to Pavarotti's vocal range, they've compensated for their constrained acoustic repertoire by ramping up vocal combinations to convey more complicated information.

:two: Robert Liu - “Experience-dependent biasing of behaviorally relevant sounds.”

Physicist-turned-neuroscientist Robert Liu gave a fantastic talk about how motherhood enables auditory plasticity in the rodent cortex. Rodents, such as mice, emit ultrasonic vocalizations (USVs) to court a mate or, in the case of pups, to call for their mom:

[Audio: mouse courtship “song” (USVs), via The Guardian]

Overall, Liu found that maternal experience reduces the proportion of neurons in auditory cortex that respond to either the onset or offset of pup USVs. What drives this shift? Liu has some ideas, including developing evidence suggesting a role for steroid hormone-dependent growth factor production that leads to enhanced selectivity for pup USVs through disinhibition.

More recently, his work has demonstrated a convincing role for the auditory cortex in memory consolidation, possibly through the formation of extracellular matrix structures (e.g., perineuronal nets), which also crystallize visual circuit plasticity early in life in rodents.

:three: Yoko Yazaki-Sugiyama - “Listening to the sound of silence: Neuronal coding of species identity in the zebra finch songs.”

Yoko's talk highlighted the key idea of nature vs. nurture in vocal learning. By cross-fostering baby zebra finches (ZFs) with Bengalese finches (BFs), she was able to tease apart which song features birds can learn to imitate ('nurture') from innate predispositions and constraints on features they cannot imitate ('nature'; genetic predispositions).

Focusing primarily on her recent paper's findings, her group found that cross-fostered ZFs are great at copying syllable morphology (syllable acoustics) but are unable to reproduce the prosody of BF song (temporal coding; the gap durations between syllables).

Following up on the song-learning data, she described how auditory regions responded to ZF vs. BF song. It turns out that Field L3, a region analogous to auditory cortex in mammals, cares a lot about the silent gaps between sounds. When birds heard artificial sounds (white noise spaced out like actual song) arranged with ZF-like syllable gaps, a group of neurons responded just as well to the noise as to natural song.

All in all, she provided compelling evidence that silent gaps are innately coded to retain species identity. As ZFs live in large flocks in the wild among many other bird species, perhaps this explains how a ZF doesn’t miss a beat and learns its proper song.

:four: Mimi Kao - “Mechanisms for variability and plasticity in vocal sequences: contributions of the AFP?”

Mimi's work focuses on the anterior forebrain pathway (AFP), a circuit similar to the basal ganglia-thalamocortical loop in humans.

For birds, LMAN is an important brain region involved in patterning vocal exploration/variability, or, phrased another way, 'babbling'. When young birds set out to imitate the memory of their father's song, their attempts are pretty bad (and variable) at first:


From Ölveczky et al. 2005, PLoS Biology

However, when inhibitory tone in LMAN is increased with a GABA agonist (muscimol), song becomes prematurely stereotyped (approximating, in terms of consistency, what it should be like in adulthood).




For a while, it was thought that LMAN doesn't affect song production after adulthood. However, recent data from Mimi's fledgling lab at Tufts show that blocking inhibition in the LMAN of adults leads to progressive degradation of song.

On the first day of treatment, birds already begin to show subtle changes in their song, and neural recordings show that firing rates in LMAN ramp up (disinhibition).

After a week of treatment, a previously highly stereotyped song becomes obviously disrupted: the syntax (order of syllables) is rearranged, the birds start stuttering, and they drop entire syllables! By about a week and a half, their adult song has regressed to that of a juvenile learning how to squawk. Song snaps back if the treatment is swapped for muscimol, or if the bird takes a three-week break from the bicuculline.

This impressive ability to revert a highly stereotyped motor sequence to a juvenile-like state offers researchers a fascinating new way to explore how neural circuits and behavior become crystallized, and how developmental plasticity might be re-opened in adulthood.

Check out Mimi’s awesome TEDx talk circa 2013 for an engaging overview of her research:

Many thanks to Greg Ball, David Vicario, and Luke Remage-Healey for organizing such a fantastic set of talks.


Follow me on Twitter :bird:
