Research

How does the brain transform sensory input into 3D models of the body?

Sensorimotor control relies on the brain's ability to accurately model the dimensions of the body. These models are constructed by integrating sensory input with prior geometric knowledge of the body. We are interested in the neurocomputational underpinnings of this process, with a particular focus on its role in somatosensory space and the localization of touch. We have recently begun exploring the idea that the brain localizes touch using computations similar to those that global positioning systems use to localize objects on Earth. We aim to characterize the nature of the sensorimotor system's Body Positioning System.
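
To make the GPS analogy concrete, the sketch below shows the core computation that global positioning systems perform: trilateration, i.e., recovering an unknown position from distance estimates to known reference points. It is purely illustrative of the analogy, assuming a simple 2D least-squares formulation; it is not our model of somatosensory localization, and all names and values are placeholders.

```python
# Minimal trilateration sketch: recover an unknown 2D position from noisy
# distance estimates to landmarks at known positions (illustrative only).
import numpy as np

def trilaterate(landmarks, distances):
    """Estimate a 2D position from distances to known landmarks.

    Linearizes the circle equations |p - x_i|^2 = d_i^2 by subtracting the
    first landmark's equation from the rest, then solves the linear system.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    distances = np.asarray(distances, dtype=float)
    x0, d0 = landmarks[0], distances[0]
    # For each remaining landmark i: 2*(x_i - x0) . p = |x_i|^2 - |x0|^2 - d_i^2 + d0^2
    A = 2.0 * (landmarks[1:] - x0)
    b = (np.sum(landmarks[1:] ** 2, axis=1) - np.sum(x0 ** 2)
         - distances[1:] ** 2 + d0 ** 2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # known reference points
true_pos = np.array([3.0, 4.0])
distances = [np.linalg.norm(true_pos - np.array(l)) for l in landmarks]
print(trilaterate(landmarks, distances))  # ~ [3. 4.]
```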

How do these models update when the body is technologically extended?

Humans often use tools to expand the ways that they can act on and shape their environment. Doing so requires the brain to adapt its models of the body to account for changes in limb geometry and dynamics. Our lab is interested in the extent to which technological extensions (e.g., tools, exoskeletons) become fused with spatial representations of the body. We are currently developing methods to identify computational signatures of this body-tool fusion in behavioral and neural data.

In what ways can a tool be used like an extended somatosensory 'organ'?

Humans can sense their surroundings through a tool; a blind person with their cane is a classic example. We have found that humans can localize touch on a tool as accurately as they can on a body part. Since tools are not innervated, this implies that the brain can extract spatial information from the tool's dynamics. Tools therefore seem more in line with a mechanical sensory organ (e.g., a rodent's whisker) than with an inert external object. We are ultimately interested in the extent to which we can say that they are a form of mechanical somatosensory organ. Our recent work specifically investigates whether the brain repurposes spatial computations that localize touch on the body to localize touch on a tool.
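
As a toy illustration of how contact location could be carried by a tool's dynamics, the sketch below treats the tool as a simply supported beam (an assumption made here for simplicity, not the mechanics of a hand-held rod): an impact at location x excites the beam's resonant modes with relative amplitudes that scale with sin(k*pi*x/L), so the pattern of modal amplitudes indexes where the tool was touched and can be decoded by template matching. This is a didactic sketch, not our experimental pipeline.

```python
# Toy sketch: decode where a rod was touched from the relative amplitudes of
# its vibration modes (simply supported beam assumption; illustrative only).
import numpy as np

L = 1.0                       # beam length (arbitrary units)
modes = np.arange(1, 4)       # first three vibration modes

def modal_amplitudes(x, length=L):
    """Relative modal amplitudes evoked by an impact at position x."""
    return np.sin(modes * np.pi * x / length)

def decode_location(amplitudes, length=L, resolution=1000):
    """Recover the impact location by matching the observed modal pattern
    against templates computed along the length of the beam."""
    candidates = np.linspace(0.0, length, resolution)
    templates = np.array([modal_amplitudes(c, length) for c in candidates])
    errors = np.linalg.norm(templates - amplitudes, axis=1)
    return candidates[np.argmin(errors)]

true_x = 0.37
observed = modal_amplitudes(true_x) + np.random.normal(0, 0.01, size=modes.size)
print(decode_location(observed))  # ~0.37
```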