
How does our brain turn what falls upon our eyes into the rich, meaningful experience that we perceive in the world around us?

The Laboratory of Cognitive Neurodynamics uses intracranial recordings (intracranial EEG; iEEG) in human patients undergoing clinical treatment, in conjunction with multivariate machine learning methods, network analysis, and advanced signal processing techniques, to understand brain network dynamics. We examine how information flows through brain networks responsible for visual perception. We are particularly interested in how the brain processes faces, bodies, objects, words, and social and affective visual images during natural behavior in the real world. Combining these recording and analysis techniques allows us to ask fine-grained questions about neural information processing and information flow at the scale of both local brain regions and broadly distributed networks. Our specific research interests fall into five broad topics:

Real-World Vision

A central goal for neuroscience is to understand how our brains process information during natural behavior in the real world. Important discoveries have come from laboratory paradigms that monitor brain activity while participants view a screen under tight experimental constraints. However, the fundamental question of how our brains process visual information during free, natural behavior remains unanswered. We combine a natural social vision paradigm that leverages cutting-edge developments in computer vision and eye tracking with iEEG recordings to understand how the brain processes visual information during natural interactions. Put another way, we seek to understand how your brain codes for your daughter, including her expressions, gestures, and movements, during natural behavior, such as playing snakes and ladders together.


In this video, we see the world through the eyes of our iEEG research participants by utilizing eye-tracking glasses. The right side shows the corresponding iEEG data in real time.


Figure: a) A participant in the UPMC Epilepsy Monitoring Unit implanted with iEEG electrodes, secured with bandaging, and fitted with SensoMotoric Instruments' (SMI) ETG 2 Eye Tracking Glasses that have been modified with an ergonomic Velcro strap. b) An over-the-shoulder view of the participant and the visual scene during an interaction with a researcher. c) Front (top) and back (bottom) views of the SMI ETG 2 Eye Tracking Glasses with the egocentric video camera (green circle) and inward-facing eye-tracking cameras (red ellipses). d) A snapshot of the participant's view (top) through the SMI ETG 2 Eye Tracking Glasses corresponding to b), and their eye movements (bottom) captured by the inward-facing eye-tracking cameras. Alreja et al.


A Day in the Life of the Human Brain

Whether we are fatigued or energetic, anxious or happy, our mood, cognition, and related neurocognitive processes fluctuate over minutes to hours. Yet our understanding of human brain function mostly comes from highly controlled experiments lasting milliseconds to seconds, or longitudinal studies spanning months to years. We have begun leveraging the rare opportunity to understand brain network dynamics at the timescale of minutes to days during natural behavior in the real world, using iEEG recordings in individuals undergoing surgical treatment for epilepsy. Combining recurrent neural networks and dynamical systems learning, we find that our brains balance flexibility and stability during real-world behavior through rapid, chaotic transitions that coalesce into neurocognitive states, which fluctuate slowly around a stabilizing central equilibrium. We combine these electrophysiological methods with rich wearable and external measures to understand the interrelationship between the brain, cognition, behavior, and physiology over minutes to days in the human brain.


Figure: Neural dynamics undergo bursts of high transition speed when natural behavior shifts.

(A-left) Network activations plotted over the week for one participant, where each color represents the activity of a different network (total activity normalized to one for visualization), along with how quickly the brain changed network activations every five seconds. (A-right) The average time between windows with the top 10% of transition speeds across all participants is shown in blue, versus the expected time if high-speed windows occurred via a homogeneous Poisson process (λ = 0.1). Wang et al.
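As a rough illustration of this kind of analysis, the sketch below computes a transition-speed time course from simulated network activations, picks out the top 10% highest-speed windows, and compares their observed spacing to the homogeneous Poisson expectation (λ = 0.1 events per window). All data, variable names, and parameters here are made up for illustration and do not reproduce the published analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for network activations: 6 networks across five-second
# windows spanning one day. Real activations would be estimated from iEEG.
n_networks, n_windows = 6, 17_280
activations = rng.dirichlet(np.ones(n_networks), size=n_windows)  # rows sum to 1

# Transition speed: how much the activation pattern changes between
# consecutive five-second windows (Euclidean distance).
speed = np.linalg.norm(np.diff(activations, axis=0), axis=1)

# "Bursts": windows in the top 10% of transition speed.
threshold = np.quantile(speed, 0.90)
burst_idx = np.flatnonzero(speed >= threshold)

# Observed mean time between bursts, in units of windows.
observed_gap = float(np.mean(np.diff(burst_idx)))

# Under a homogeneous Poisson process with rate lambda = 0.1 events/window,
# the expected inter-event time is 1 / lambda = 10 windows.
expected_gap = 1 / 0.1

print(f"observed mean gap: {observed_gap:.2f} windows")
print(f"Poisson expectation: {expected_gap:.1f} windows")
```

With purely random activations the observed spacing matches the Poisson expectation; clustered bursts of high-speed windows, as in the figure, would instead produce shorter-than-expected gaps interleaved with long quiet stretches.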

Social and Affective Perception


We use information from a person's face, body, and motions to assess their actions, intentions, and emotional state. To determine how the brain processes social information, we use iEEG recordings from the brain circuits involved in social and affective perception in individuals undergoing surgical treatment for epilepsy. We have found that different subregions of the face processing system are involved in different stages of facial expression processing, with distinct dynamics; we have illustrated the overlap between face identity and expression representations; and we have examined the interaction between face and body perception. We also collaborate with Dr. Taylor Abel's lab, which studies voice perception, and have begun examining how we combine visual, auditory, and linguistic information for social perception.


Examples of body-to-face identity adaptation stimuli, including face morphs, gender-specific bodies, and two adaptation trial sequences. (Ghuman et al., 2010)


Neurodynamics of Face, Object, and Word Recognition


Our remarkable capacity to recognize a large variety of faces, words, and objects is critical to our ability to interact efficiently with the world around us. This capacity arises from regions of the temporal, occipital, and parietal lobes tuned to particular visual categories, such as faces, words, and bodies. We aim to understand the dynamic information processing stages that these brain networks use to recognize visual input. To accomplish this aim, we use iEEG recordings in conjunction with multivariate machine learning methods to decode the information being processed in these brain networks on a millisecond-by-millisecond basis. By understanding how the neural code for different aspects of visual information changes in time, we can assess how the brain dynamically moves through different levels of representation. For example, we found that single brain areas rapidly form a coarse-level representation after seeing an image and, through interactive processing, evolve to a finer-level representation later. Furthermore, we have examined the kinds of distortions that occur when brain areas involved in visual recognition are perturbed with electrical stimulation.
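A minimal sketch of the time-resolved decoding idea, using simulated data and a simple nearest-class-mean classifier as a stand-in for the multivariate machine learning models used in practice. Every number, variable name, and the injected signal window here are illustrative assumptions, not the lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated single-trial data: 200 trials x 32 channels x 200 time points.
# Real inputs would be iEEG potentials; y is the visual category per trial
# (e.g. 0 = face, 1 = word).
n_trials, n_channels, n_times = 200, 32, 200
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)

# Inject category information between time points 60 and 140, so decoding
# accuracy should rise above chance only in that window.
X[y == 1, :, 60:140] += 0.5

def decode_at(Xt, y, half):
    """Split-half nearest-class-mean decoding at a single time point."""
    train, test = slice(0, half), slice(half, None)
    means = np.stack([Xt[train][y[train] == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(Xt[test][:, None, :] - means[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y[test]).mean())

# Decode the category separately at every time point, yielding an
# accuracy time course that traces when the information is present.
half = n_trials // 2
accuracy = np.array([decode_at(X[:, :, t], y, half) for t in range(n_times)])

print(f"peak accuracy {accuracy.max():.2f} at time point {accuracy.argmax()}")
```

The resulting accuracy curve hovers at chance (0.5) outside the injected window and rises well above it inside, which is the basic logic behind tracking how representations emerge and evolve over milliseconds.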

Recorded Experiment: Letter misperception with stimulation to the left mid-fusiform

All individuals featured in footage provided written consent to the public presentation of these materials.

Methods for Understanding Information Processing Neurodynamics



How do brain areas talk to one another? What code do they use? What information do they exchange? How does the structure of the brain influence how brain regions communicate? We use and are developing neural synchrony measures, multimodal network analyses, and multivariate decoding procedures to better understand the principles of neural communication and information exchange. We are developing novel multivariate machine learning approaches that will allow us to assess the information encoded in neural interactions.
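As one concrete example of a neural synchrony measure, the sketch below computes the phase-locking value (PLV) between pairs of simulated signals. In practice the inputs would be band-pass filtered iEEG from two recording sites; the signals, frequencies, and noise levels here are illustrative assumptions.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (Hilbert transform): zero negative frequencies."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spectrum * h)

def plv(x, y):
    """Phase-locking value: |mean of e^{i(phase_x - phase_y)}|, in [0, 1]."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Simulated narrowband signals (real iEEG would be band-pass filtered first).
fs, dur = 500, 4.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 10 * t + 1.0) + 0.3 * rng.standard_normal(t.size)  # fixed lag from a
c = np.sin(2 * np.pi * 13 * t) + 0.3 * rng.standard_normal(t.size)        # independent rhythm

print(f"PLV(a, b) = {plv(a, b):.2f}")  # high: consistent phase lag
print(f"PLV(a, c) = {plv(a, c):.2f}")  # low: drifting phase difference
```

A high PLV indicates a consistent phase relationship between two sites, one candidate signature of the inter-areal communication the questions above ask about; it is only one of several synchrony measures in use.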
