The last few days at SfN came at me, and maybe you, from all angles. It has been a deluge of information and insight into some amazing work. One theme covered the mechanisms of learning from both biological and computational perspectives. Let's take a look at presentations from both vantages.
Presenting in the session 'From salient experience to learning and memory', Dr. Andreas Lüthi examined how amygdalar neuronal circuits form an association between a conditioned stimulus (CS) and an unconditioned stimulus (US). Understanding how these associations form may help prevent or treat anxiety disorders, in which an otherwise innocuous stimulus triggers debilitating anxiety.
Principal cells in the basolateral region of the amygdala rarely fire. Lüthi believes strong inhibition from surrounding interneurons limits principal cell firing. To understand the role of this inhibition, his group visualized the activity of interneurons during an associative learning task. These experiments found that VIP interneurons initially respond to the unconditioned stimulus and that subsequent unconditioned stimuli cause a decrease in VIP cell activity. This decrease in inhibitory signal correlates with an increase in freezing, a common behavioral output measure. They also found that inhibiting VIP activity during associative learning reduced freezing during later memory retrieval. To properly form the association, then, VIP interneurons need to fire initially, facilitating increased activity in the principal neurons, and then decrease their firing so the principal neurons can help trigger the behavioral output through circuits projecting from the amygdala.
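The disinhibition logic described above can be sketched as a toy simulation. This is purely illustrative, not Lüthi's actual model or data: it simply assumes VIP inhibition decays over repeated US presentations, that principal-cell activity is excitatory drive minus inhibition, and that freezing tracks principal-cell output.

```python
# Toy model (illustrative only): VIP inhibition of basolateral amygdala
# principal cells decays over repeated CS-US pairings, disinhibiting the
# principal cells and increasing the freezing readout.

def simulate_conditioning(n_trials=10, decay=0.3):
    """Return per-trial (VIP activity, principal activity, freezing)."""
    vip = 1.0      # initial VIP response to the US (arbitrary units)
    drive = 1.0    # constant excitatory drive onto principal cells
    history = []
    for _ in range(n_trials):
        principal = max(0.0, drive - vip)  # inhibition gates principal firing
        freezing = min(1.0, principal)     # behavioral output, capped at 1
        history.append((vip, principal, freezing))
        vip *= (1 - decay)                 # VIP response weakens with repeated US
    return history

if __name__ == "__main__":
    for trial, (v, p, f) in enumerate(simulate_conditioning()):
        print(f"trial {trial}: VIP={v:.2f} principal={p:.2f} freezing={f:.2f}")
```

Note that setting the initial VIP activity to zero (mimicking the inhibition-of-VIP experiment) would, in this cartoon, remove the trial-by-trial change that accompanies learning, loosely paralleling the reduced freezing at retrieval.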
Moving more broadly, general computational methods for learning are being investigated at Google's DeepMind, run by Dr. Demis Hassabis. In his lecture, he examined the recent interplay between neuroscience and artificial intelligence.
The work at DeepMind has trained neural networks on Go and other games to beat human champions. Mastering difficult games at such a level has taught Go players new strategies, and Hassabis argues this kind of learning will broaden our collective intelligence rather than undermine human capacity.
The company has also started to develop neural networks that, given 2-3 2D views of an environment, can generate the entire 3D space. Right now this works only for simple spaces, but he hopes the ability to generate scenes will teach us about our own capacity to imagine. Together, these algorithms will inform us about the human experience of learning.
These two lectures explored learning in very different ways. The biological approach informs us about our natural algorithms and their circuitry. The computational approach explores learning tasks we do naturally, but with circuitry that may not reflect biology. From this we can compare and contrast. Even more interesting, the computational learning algorithms are able to advance our knowledge, as observed in the new Go strategies. This may navigate us around circuit 'blocks' in our own learning systems, toward a better understanding of the natural world.
Patrick E. Steadman, MSc
PhD Candidate, Frankland Lab, The Hospital for Sick Children
MD-PhD Student, University of Toronto