The Allure of Computational Neuroscience – a novel approach to understanding neural dynamics


Jayalakshmi Viswanathan

Dr. Kanaka Rajan is a computational neuroscientist and Associate Professor at the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai in New York. Her research focuses on using computational techniques to disentangle the dynamic (that is, time-varying) aspects of cognition. In her meet-the-expert SfN lecture on 21 August 2024, Dr. Rajan highlighted the insights that can be gained from applying innovative computational neuroscience techniques to large-scale neural data.

In her talk, Dr. Rajan first introduced attendees to the dynamic nature of cognition: how ongoing, time-varying neural activity gives rise to complex phenomena like "Heidi the Octopus" dreaming or thinking about a crab. To explain such complex behaviors, neural network models need computational properties beyond those of feedforward deep architectures, which can perform tasks such as image recognition. Driven, nonlinear recurrent neural networks (RNNs) with random wiring, however, develop "brain-like" features: ongoing activity, reliable responses, and the ability to learn specific tasks through training.

After developing this model during her doctoral dissertation, Dr. Rajan turned to models constrained by real-world data, especially large-scale neural data from simultaneous recordings of hundreds of neurons. Using dimensionality reduction, projecting the data onto a small set of coordinate axes, these complex recordings were smoothed into "neural manifolds" that could be visualized.

Taking this a step further, Dr. Rajan and her team focused on parsing the input-output relationships in these data. For any recorded neuron, inputs can come from nearby neurons, from neurons outside the recorded areas, from internal "ongoing" activity, and from external experimental stimuli. By building data-constrained multi-region models trained directly on neural data, applying novel analytical techniques, and inferring mechanisms inaccessible from measurements alone, Dr. Rajan's team was able to reproduce realistic neural dynamics in silico and to analyze the sparse functional connectivity between the "brain regions" of these models. In other words, this approach allowed the team to decompose recorded neural outputs into their source currents and inputs.
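To make the RNN idea concrete, here is a minimal sketch of a driven, nonlinear RNN with random recurrent wiring of the general kind described in the talk. This is not Dr. Rajan's code; the network size, gain, and time constants are illustrative assumptions, and the dynamics follow the standard rate-model equation tau * dx/dt = -x + J @ tanh(x) + input.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the lecture)
N = 200          # number of units
g = 1.5          # gain of random coupling; g > 1 yields rich ongoing activity
dt, tau = 0.01, 0.1

rng = np.random.default_rng(0)
# Random recurrent weights, scaled so activity neither dies nor explodes
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

def simulate(T=500, drive=None):
    """Euler-integrate tau * dx/dt = -x + J @ tanh(x) + drive."""
    x = rng.normal(0.0, 0.5, N)        # random initial state
    rates = np.empty((T, N))
    for t in range(T):
        u = drive[t] if drive is not None else 0.0
        x = x + (dt / tau) * (-x + J @ np.tanh(x) + u)
        rates[t] = np.tanh(x)          # bounded firing rates
    return rates

rates = simulate()
print(rates.shape)  # (500, 200)
```

With no external drive the network still produces ongoing, self-sustained activity, which is the "brain-like" property that distinguishes it from a purely feedforward network.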
The success of this approach was demonstrated with whole-brain recording data from a zebrafish performing a stress task. A three-region model built with this approach made it possible to explore the temporal effects of the stress task on the activity of the various regions, and how activity in each region feeds into the others. Based on these results, a compositional view of neural modularity was proposed, and the stress-task results provided insight into how ongoing neural activity in each region affects the others, as well as the consequences of perturbing this ongoing activity with behavioral or sensory inputs.
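The region-to-region decomposition described above can be sketched as follows. Assuming a trained recurrent weight matrix and region labels for its units, the recurrent input to each region splits exactly into contributions (source currents) from every source region. The names, shapes, and stand-in data here are illustrative, not the published CURBD API.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 90, 100
# Stand-in "trained" weights and model firing rates (random placeholders)
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
rates = np.tanh(rng.normal(0.0, 1.0, (N, T)))
# Hypothetical assignment of units to three regions
regions = {"A": slice(0, 30), "B": slice(30, 60), "C": slice(60, 90)}

# currents[(tgt, src)]: current into region tgt sourced from region src
currents = {
    (tgt, src): J[regions[tgt], :][:, regions[src]] @ rates[regions[src], :]
    for tgt in regions for src in regions
}

# The source currents sum back to the total recurrent input to each region
total_A = sum(currents[("A", src)] for src in regions)
assert np.allclose(total_A, J[regions["A"], :] @ rates)
```

Because the decomposition is exact, inspecting each (target, source) current over time is what lets one ask how activity in one region feeds into another, as in the zebrafish stress-task analysis.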

As a neuroscientist and an engineer, I tuned in to this SfN webinar to follow the progress in this rich area of philosophical and scientific inquiry, and the lecture delivered on both levels. It spanned many aspects of computational neuroscience, from the history and milestones of deep architecture models to the principles of decoding dynamic processes, and the Q&A discussion expanded on these topics further. I would recommend this lecture to all neuroscientists with an interest in computational neuroscience and artificial intelligence. The model and more information about Dr. Rajan and her team's work can be found at: github.com/rajanlab/CURBD
