
Understanding early visual codes and their relative contribution to rapid perceptual inference

Grantee: Colgate University

Grant Details

Project Lead Bruce C. Hansen, Ph.D.
Amount $600,000
Year Awarded
Duration 8 years
DOI https://doi.org/10.37717/220020430
Summary

Our everyday visual experiences typically yield a sense of ground truth: we believe we are operating directly on external information. Despite that belief, a significant number of our decisions and actions in visual environments depend exclusively on perceptual inferences derived from internalized representations of external information. Put another way, many of our decisions and subsequent actions are the direct result of our brains making “guesses” based on “fabricated” information. Remarkably, the brain’s strategy for deriving “meaning” from guesses based on fabrication is not only highly accurate but also extremely efficient. That is, high levels of decision accuracy are achieved with visual information sampled in less time than it takes to blink your eyes. Exactly how the brain accomplishes rapid perceptual inference remains elusive. My research program is therefore focused on answering this question by understanding how early low-level visual signals provide distinct codes for different visual environments, and how those codes contribute to the formation of rapid perceptual inference.

The primary domain within which my research program explores rapid perceptual inference is visual scene categorization (e.g., identifying a scene at the basic or superordinate level). Scene categorization is an ideal vehicle for exploring rapid perceptual inference because: 1) it is exceedingly fast and efficient (i.e., it can be achieved after viewing a scene for as little as 1/100th of a second), 2) it is known to precede identification of the constituent objects of the scene, and 3) it is known to serve as the guiding framework for directing the deployment of overt attention. Given the extreme efficiency of this categorization process, my collaborators and I have argued that it may be the internalized low-level structural attributes of scenes that guide rapid categorization – an argument that has fueled an ongoing interdisciplinary investigation in my lab over the last several years. To date, that work has shown that different visual scenes possess distinct structural regularities and that human observers actually use those regularities during rapid scene categorization. Further, visual evoked potential (VEP) work from my lab has shown that those same structural regularities are faithfully encoded at the earliest stage of cortical visual processing.
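To make the notion of “low-level structural regularities” concrete, the sketch below shows one simple way such regularities are often summarized computationally: binning a scene image’s Fourier amplitude spectrum by orientation and spatial frequency to produce a compact structural fingerprint. This is an illustrative example only, not the specific analysis used in my lab; the function name structural_fingerprint, the bin counts, and the use of NumPy are assumptions made for the sketch.

```python
import numpy as np

def structural_fingerprint(image, n_orient=8, n_freq=6):
    """Summarize a 2-D grayscale image by its orientation x spatial-frequency energy."""
    img = image - image.mean()                        # remove the mean (DC) component
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))   # centered Fourier amplitude spectrum

    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]  # vertical frequency of each row
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]  # horizontal frequency of each column
    radius = np.sqrt(fx**2 + fy**2)                   # spatial frequency (cycles/pixel)
    theta = np.mod(np.arctan2(fy, fx), np.pi)         # orientation, folded into [0, pi)

    o_edges = np.linspace(0.0, np.pi, n_orient + 1)
    f_edges = np.linspace(0.0, radius.max() + 1e-9, n_freq + 1)

    fingerprint = np.zeros((n_orient, n_freq))
    for i in range(n_orient):
        for j in range(n_freq):
            in_bin = ((theta >= o_edges[i]) & (theta < o_edges[i + 1]) &
                      (radius >= f_edges[j]) & (radius < f_edges[j + 1]))
            fingerprint[i, j] = amp[in_bin].sum()     # total amplitude in this bin

    return fingerprint / fingerprint.sum()            # relative energy across bins

# Toy example: a synthetic image dominated by horizontal structure concentrates its
# energy in the orientation/frequency bins that match that structure.
y, x = np.mgrid[0:256, 0:256]
horizontal_stripes = np.sin(2 * np.pi * y / 32.0)
print(structural_fingerprint(horizontal_stripes).round(3))
```

With a summary of this general kind, images from different scene categories tend to occupy different regions of the resulting feature space, which is the intuition behind the claim that such regularities can support rapid categorization.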

In light of this ongoing research, I have built a working theory that it is the early visual cortical signals themselves that define the boundaries between scene categories. That is, when rapid categorization is necessary, such signals may actually constitute the categorical representation itself (as opposed to neural processes that signal the scenes’ conceptual attributes). With support from the James S. McDonnell Foundation, I plan to launch a broad-scale investigation that will test this notion through behavioral and neuroelectric paradigms. I also plan to examine the relative contribution of early visual signals to other domains involving rapid comprehension and decision-making.