Sensorimotor control relies on the brain's ability to accurately model the dimensions of the body. These models are constructed by integrating sensory input with prior geometric knowledge of the body. We are interested in the neurocomputational underpinnings of this process, with a particular focus on its role in representing somatosensory space and localizing touch. We have recently begun exploring the idea that the brain localizes objects on the body using computations similar to those that global positioning systems use to localize objects on Earth. We aim to characterize the nature of this sensorimotor Body Positioning System.
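The GPS analogy can be made concrete with a toy example. A GPS receiver recovers its position from distances to several reference points (satellites); the sketch below does the same in 2D by linearizing the range equations and solving by least squares. This is purely illustrative of the class of computation, not a model from our work; the anchor positions and target location are arbitrary.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Recover a 2D position from known anchor positions (N, 2) and
    distance estimates (N,), by subtracting the range equation of the
    first anchor from the others and solving the linear system."""
    x0, y0 = anchors[0]
    d0 = distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Three "satellites" and a hidden target at (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
distances = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate(anchors, distances))  # recovers approximately [3. 4.]
```

With noisy distance estimates the same least-squares machinery returns the best-fitting position rather than an exact one, which is the regime relevant to noisy sensory input.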
Humans often use tools to expand the ways they can act on and shape their environment. Doing so requires the brain to adapt its models of the body to account for changes in limb geometry and dynamics. Our lab is interested in the extent to which technological extensions (e.g., tools, exoskeletons) become fused with spatial representations of the body. We are currently developing methods to identify computational signatures of this body-tool fusion in behavioral and neural data.
Humans can sense their surroundings through a tool; a blind person with their cane is a classic example. We have found that humans can localize touch on a tool as accurately as they can on a body part. Since tools are not innervated, this means that the brain must extract spatial information from the tool's dynamics. Tools therefore seem more akin to a mechanical sensory organ (e.g., a rodent's whisker) than to an inert external object. We are ultimately interested in the extent to which tools can be considered a form of mechanical somatosensory organ. Our recent work specifically investigates whether the brain repurposes the spatial computations that localize touch on the body to localize touch on a tool.
As stimuli move closer to the body, their value increases. A jar of cookies is far easier to resist when it is across the room than when it sits within arm’s reach; a dog poses little concern at a distance, but becomes an immediate threat the moment it charges toward you. Classic monkey electrophysiology studies show that parieto-frontal regions scale their neural activity as a function of a stimulus's proximity to the body, a zone often called peripersonal space. We aim to link the concept of peripersonal space to theories of reinforcement learning in subcortical systems (e.g., the striatum), investigating whether the proximity of a stimulus to the body determines which learning systems are engaged. We hypothesize that when stimuli enter peripersonal space, the brain shifts to a fast-learning mode optimized for tracking rapid change; distant events, in contrast, are learned about more slowly, through the accumulation of average outcomes.
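The fast-versus-slow hypothesis can be sketched as a delta-rule (Rescorla-Wagner) learner whose learning rate is gated by proximity. Everything here is an illustrative assumption: the threshold defining peripersonal space and the two learning rates are arbitrary placeholders, not fitted values from our data.

```python
# Illustrative assumptions: rates and threshold are placeholders.
FAST_ALPHA = 0.5   # learning rate inside peripersonal space
SLOW_ALPHA = 0.05  # learning rate for distant stimuli
PPS_RADIUS = 0.4   # proximity threshold (normalized units)

def update_value(value, reward, distance):
    """One delta-rule update with a proximity-gated learning rate."""
    alpha = FAST_ALPHA if distance < PPS_RADIUS else SLOW_ALPHA
    return value + alpha * (reward - value)

# After a sudden change in outcomes, the value estimate for a near
# stimulus adapts quickly while the estimate for a far one barely moves.
v_near = v_far = 0.0
for reward in [1.0] * 5:
    v_near = update_value(v_near, reward, distance=0.2)
    v_far = update_value(v_far, reward, distance=1.5)
print(round(v_near, 3), round(v_far, 3))  # prints: 0.969 0.226
```

The slow learner's estimate is effectively a long-run average over past outcomes, which captures the intuition that distant events are learned about through accumulated experience rather than rapid updating.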