Third-party-funded individual grant
Acronym: MultiCEA
Start date: 01.02.2026
End date: 31.01.2029
Human perception excels at creating coherent, adaptive multisensory representations from multifaceted sensory inputs. The brain must integrate stimuli across senses—like a speaker's voice and face at a noisy party—if they share a common cause, while keeping independent sources separate. It achieves this by inferring the causal structure (one source vs. multiple) using spatiotemporal cues and prior expectations.
Traditional research treated causal inference as a static "one-shot" process with brief inputs and fixed outputs. However, real-world environments are dynamic, demanding a time-resolved model. The project proposes intertwining two processes: (1) causal evidence accumulation, where the brain samples sensory data over time to reduce uncertainty about causes; and (2) competition between rival hypotheses (e.g., "common cause" vs. "independent"), resolved via prefrontal conflict monitoring and regulation. These continue until a stable percept forms.
To test this neurocognitive framework, the study combines audiovisual psychophysics, computational modelling, and EEG/fMRI analyses in three work packages (WPs). A core computational model (CI-DDM) integrates Bayesian causal inference with drift-diffusion modelling to capture causal decisions, reaction times, and perceptual integration (WP1). EEG (WP2) and fMRI (WP3) analyses will reveal the spatiotemporal neural dynamics across cortical hierarchies, including the prefrontal resolution of hypothesis competition.
The findings will advance understanding of multisensory causal inference in dynamic settings, with broad implications for cognitive neuroscience.
Human perception is remarkably successful at obtaining coherent, adaptive, multisensory representations from extremely complex sensory environments. For veridical representations, our brains must integrate stimuli across modalities if they originate from a common cause, but not integrate crossmodal stimuli from independent causes. For example, at a crowded party, our brains can efficiently integrate one out of many voices with the face of a speaker, and rarely misidentify who is speaking. Recent evidence from us and others suggests that the brain infers the causal structure of multisensory stimuli (i.e., common vs. independent causes) by combining causal evidence from spatiotemporal relations between stimuli with prior assumptions. To date, most research has focused on causal inference as a static ‘one-shot’ process, with brief sensory inputs and a fixed causal inference output. However, understanding multisensory perception in complex environments requires a time-resolved framework: In an ever-changing sensory landscape, the brain must dynamically exploit intersensory relations to accumulate causal evidence over time to make fast and accurate causal decisions. We propose to develop such a dynamic framework of multisensory causal inference by intertwining two time-evolving processes: First, a causal evidence accumulation process whereby the brain samples sensory information to reduce uncertainty about the underlying causal structure of the stimuli. Second, a competitive process whereby alternative (and incompatible) hypotheses about the stimuli’s causal structure are entertained during causal evidence accumulation. To resolve the competition, the brain recruits prefrontal conflict monitoring and regulation processes. Accumulation and competition continue until causal uncertainty is sufficiently reduced and a multisensory percept has stabilised.
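As a concrete reference point, the sketch below implements the static ‘one-shot’ building block that the dynamic framework extends: Gaussian Bayesian causal inference over a common-cause versus independent-causes structure (in the spirit of Körding et al., 2007). This is a minimal illustration only; the function name, parameter values, and the zero-centred prior are assumptions made for the example, not the project’s implementation.

```python
import numpy as np

def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Posterior probability that auditory and visual cues share one cause.

    Standard Gaussian Bayesian causal inference with a zero-centred prior
    over the latent source signal (e.g., a rate); all parameters here are
    illustrative assumptions.
    """
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the cue pair under one common source (source integrated out)
    denom_c1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * var_p
                             + x_a**2 * var_v
                             + x_v**2 * var_a) / denom_c1) \
        / (2 * np.pi * np.sqrt(denom_c1))
    # Likelihood under two independent sources
    like_c2 = np.exp(-0.5 * (x_a**2 / (var_a + var_p)
                             + x_v**2 / (var_v + var_p))) \
        / (2 * np.pi * np.sqrt((var_a + var_p) * (var_v + var_p)))
    # Bayes' rule over the two causal structures
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

# Similar audiovisual measurements yield a high posterior of a common cause
print(posterior_common_cause(x_a=9.0, x_v=10.0, sigma_a=2.0,
                             sigma_v=1.0, sigma_p=10.0, p_common=0.5))
```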
To investigate this neurocognitive model, the project combines audiovisual psychophysical experiments with computational modelling and univariate and multivariate pattern analyses of EEG and fMRI data from healthy human participants in three work packages (WPs). Using an audiovisual rate paradigm, we will characterise how the brain accumulates causal evidence by integrating Bayesian causal inference with a drift-diffusion model (CI-DDM). In a psychophysical study (WP1), the CI-DDM models observers’ causal decisions, their reaction times, and their audiovisual perceptual integration. Applied to EEG (WP2) and fMRI (WP3) data, the CI-DDM charts the neural spatiotemporal dynamics of causal evidence accumulation and causal inference throughout the cortical hierarchies. Further, we will investigate how prefrontal conflict monitoring and regulation resolve the competition between causal hypotheses.
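Because the CI-DDM itself is to be developed in WP1, the following is only a schematic sketch of how a drift-diffusion process can turn momentary causal evidence into causal decisions and reaction times; the function name, parameter values, and the constant drift standing in for causal evidence are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ci_ddm(drift, bound=1.0, noise=1.0, dt=0.001,
                    t_nondec=0.3, n_trials=1000):
    """Toy drift-diffusion read-out of accumulated causal evidence.

    An illustrative sketch only: 'drift' stands for the mean momentary
    causal evidence (e.g., a log-likelihood ratio favouring 'common cause'
    over 'independent causes'). Accumulation runs to +/- 'bound'; the sign
    of the crossing gives the causal decision, and the crossing time plus
    a non-decision time gives the reaction time.
    """
    choices, rts = np.empty(n_trials, dtype=int), np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = 1 if x > 0 else 2   # 1 = common cause, 2 = independent
        rts[i] = t + t_nondec
    return choices, rts

# Example: weak evidence for a common cause -> slower, less consistent choices
choices, rts = simulate_ci_ddm(drift=0.8)
print(f"P(report common cause) = {np.mean(choices == 1):.2f}, "
      f"mean RT = {rts.mean():.2f} s")
```

In a full CI-DDM, the drift would presumably be derived trial by trial from the momentary causal evidence in the audiovisual rate signals rather than fixed in advance, which is what lets a single model jointly capture causal decisions, reaction times, and perceptual integration.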
The results of the project will extend and generalise our current understanding of how humans perform causal inference for multisensory perception in dynamic, complex environments.