
Designing salient, naturalistic “super-stimuli” with deep generative models

February 2 @ 4:00 pm - 5:00 pm

Attention can be deployed either voluntarily – based on task goals – or captured automatically – by salient sensory stimuli. Previous studies have controlled stimulus salience by altering low-level image features (e.g., luminance or popout) or by inducing motion dynamics (e.g., flash, loom); such salient stimuli capture attention automatically by driving strong neural responses in multiple visual areas. Yet precisely which combinations of naturalistic (high-level) stimulus features drive the strongest neural responses, and produce the highest behavioral salience, remains a topic of active research.

As a first step toward this goal, I will describe the design of salient “super-stimuli” – high-resolution, naturalistic images tailor-made to evoke the strongest responses in specific brain areas. The high dimensionality of natural images renders this optimization prohibitively challenging at the pixel level. To tackle this challenge, we extend a recently developed framework called XDream, which combines a deep generative network with a heuristic (genetic) optimization algorithm and was recently used to generate “super-stimuli” for the monkey visual cortex. We extend this framework with a CNN-based encoder of human functional MRI (fMRI) brain responses and design novel classes of “super-stimuli” optimized for the human brain. Specifically, we advance the optimization algorithm to address the following questions: i) given an object category (e.g., telephones or computers), can we design a class of super-stimuli that still respects category boundaries? ii) can we design “chimeric” super-stimuli that combinatorially activate (or suppress) multiple brain regions (e.g., primary visual cortex and the face area) at once? I will conclude by describing ongoing work that seeks to validate the generated images by measuring human visual cortex responses directly with fMRI and by quantifying the behavioral salience of the generated images with psychophysical experiments.

Speaker(s): Prof. Sridhar Devarajan

Virtual: https://events.vtools.ieee.org/m/403671
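
For readers curious about the kind of pipeline the abstract describes, below is a minimal, schematic Python sketch of an XDream-style loop: a genetic algorithm searches over latent codes of a generative network to maximize a predicted brain response. This is not the speaker's actual code; the generator, predict_response, latent dimensionality, and population settings are placeholder assumptions standing in for the pretrained deep generative network and the CNN-based fMRI encoder mentioned in the talk.

# Schematic sketch of an XDream-style optimization loop.
# ASSUMPTIONS: `generator` and `predict_response` are dummy placeholders for the
# pretrained deep generative network and the CNN-based fMRI encoder; all
# hyperparameters are illustrative, not values used in the actual work.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 4096      # dimensionality of the generator's latent code (assumed)
POP_SIZE = 40          # images evaluated per generation (assumed)
N_GENERATIONS = 200
MUTATION_SCALE = 0.1

def generator(z: np.ndarray) -> np.ndarray:
    """Placeholder for a deep generative network mapping a latent code
    to a naturalistic image (here: a dummy 64x64 array)."""
    return np.tanh(z.reshape(64, 64))

def predict_response(image: np.ndarray, region_weights: dict) -> float:
    """Placeholder for the CNN-based encoder that predicts fMRI responses.
    A signed, weighted sum over regions allows 'chimeric' objectives,
    e.g. {'V1': +1.0, 'FFA': -1.0} drives one region while suppressing another."""
    fake_regional_responses = {
        "V1": float(np.abs(image).mean()),
        "FFA": float(image.std()),
    }
    return sum(w * fake_regional_responses[r] for r, w in region_weights.items())

def evolve(region_weights: dict) -> np.ndarray:
    """Simple genetic algorithm over latent codes: score the population,
    keep the fittest half, recombine random parent pairs, and mutate."""
    population = rng.standard_normal((POP_SIZE, LATENT_DIM))
    for _ in range(N_GENERATIONS):
        fitness = np.array([
            predict_response(generator(z), region_weights) for z in population
        ])
        parents = population[np.argsort(fitness)[-POP_SIZE // 2:]]  # top half
        idx_a = rng.integers(len(parents), size=POP_SIZE)
        idx_b = rng.integers(len(parents), size=POP_SIZE)
        population = 0.5 * (parents[idx_a] + parents[idx_b])        # recombine
        population += MUTATION_SCALE * rng.standard_normal(population.shape)
    best = population[np.argmax([
        predict_response(generator(z), region_weights) for z in population
    ])]
    return generator(best)

# Example: a "super-stimulus" aimed at maximizing the predicted V1 response
# while suppressing the face area (one possible chimeric objective).
super_stimulus = evolve({"V1": 1.0, "FFA": -1.0})

The region_weights dictionary is only one way a multi-region ("chimeric") objective could be expressed; the objective, encoder, and optimizer used in the actual work may differ.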