Humans perceive the world in rich visual detail. In just a fraction of a second, we not only detect the objects and people in our environment but also recognize people’s emotions, goals, actions, and social interactions. Detecting these higher-level properties remains extremely challenging even for state-of-the-art computer vision systems. How do humans extract all of this complex information with such speed and ease?
Our research aims to answer this question using a combination of human neuroimaging, intracranial recordings, behavioral experiments, and machine learning/artificial intelligence. Specific research questions include:
What is the neural basis of social interaction perception?
Which aspects of our rich visual experience arise from fast perceptual processing versus slower cognitive reasoning?
What are the neural computations underlying invariant visual recognition?