GT Neuro Seminar Series - Kavli Brain Forum

"Using Artificial-intelligence-driven Deep Neural Networks to Uncover Principles of Brain Representation and Organization"

Daniel Yamins, Ph.D.
Assistant Professor
Stanford University

Human behavior is founded on the ability to identify meaningful entities in the complex, noisy data streams that constantly bombard the senses. For example, in vision, retinal input is transformed into rich object-based scenes; in audition, sound waves are transformed into words and sentences. In this talk, I will describe my work using computational models to help uncover how sensory cortex accomplishes these enormous computational feats. The core observation underlying my work is that optimizing neural networks to solve challenging real-world artificial intelligence (AI) tasks can yield predictive models of the cortical neurons that support these tasks. I will first describe how we leveraged recent advances in AI to train a neural network that approaches human-level performance on a challenging visual object recognition task. Critically, even though this network was not explicitly fit to neural data, it is nonetheless predictive of neural response patterns in multiple areas of the visual pathway, including higher cortical areas that have long resisted modeling attempts. Intriguingly, an analogous approach turns out to be helpful for studying audition, where we recently found that neural networks optimized for word recognition and speaker identification tasks naturally predict responses in human auditory cortex to a wide spectrum of natural sound stimuli, and help differentiate poorly understood non-primary auditory cortical regions. Together, these findings suggest the beginnings of a general approach to understanding sensory processing in the brain.
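To make the core claim concrete: in this line of work, a network is optimized only for a behavioral task (e.g., object recognition), and one then asks how well a simple linear readout of its internal activations predicts recorded neural responses to the same stimuli. The Python sketch below illustrates that analysis pipeline under stated assumptions; the choice of pretrained network and layer, the ridge regression readout, and the synthetic stand-in for neural recordings are all illustrative, not the speaker's exact methods or data.

    # Minimal sketch of the task-optimized-network analysis, assuming
    # synthetic stand-ins for real neural recordings (e.g., from area IT).
    import numpy as np
    import torch
    import torchvision.models as models
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    # A network optimized for object recognition, never fit to neural data.
    net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

    # Hypothetical stimulus set (200 images) and recorded responses of
    # 50 neurons to those images; real data would replace these arrays.
    images = torch.rand(200, 3, 224, 224)
    neural_responses = np.random.rand(200, 50)

    # Use activations from the network's convolutional stack as the
    # model representation of each stimulus.
    with torch.no_grad():
        features = net.features(images).flatten(start_dim=1).numpy()

    # Fit a linear readout on a training split; neural predictivity is
    # measured as held-out R^2 (near zero here, since responses are random).
    X_train, X_test, y_train, y_test = train_test_split(
        features, neural_responses, test_size=0.25, random_state=0)
    readout = Ridge(alpha=1.0).fit(X_train, y_train)
    print("held-out R^2:", readout.score(X_test, y_test))

The key design point the sketch captures is that only the final linear mapping ever sees neural data; all of the representational structure comes from the task-optimized network itself.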