A high-dimensional model of the neural representational space for face and object recognition
Prof. James V. Haxby

Visual cortical fields can be modeled as high-dimensional representational spaces. We model these spaces using a new algorithm, hyperalignment, which affords transformation of individual brains' voxel spaces into common, high-dimensional model spaces. Projecting individual data into common model spaces affords between-subject multivariate pattern (MVP) classification of fine distinctions among brain responses to faces, animals, and objects that equals or exceeds within-subject classification. Data in common model space coordinates can be projected back into the cortical topography of an individual's brain using the transpose of the transformation matrix for that subject. Building models with general validity across stimuli and across experiments requires broad sampling of visual stimuli, which we demonstrate using responses measured while subjects watch a full-length action movie. Models based on responses to still images from a moderate number of categories do not have general validity. Thus, category perception experiments do not provide adequate data for modeling the representational space in ventral temporal visual cortex. Category-selective regions are preserved in the model as single dimensions that reflect the relevant category contrast. The topographies associated with these dimensions agree well with the boundaries of individual-specific category-selective regions. The high-dimensional model, however, also shows that these regions have finer-scale topographies within them that afford fine distinctions among stimulus representations that are not accounted for by models based on category-selective regions. Thus, although category-selective regions are significant features of the high-dimensional model, they are only subspaces that provide an insufficient basis for models of visual stimulus representation in ventral temporal visual cortex.
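The abstract does not spell out the algorithmic details of hyperalignment, but the two operations it names, mapping a subject's voxel space into a common model space with a transformation matrix and projecting back with that matrix's transpose, can be sketched with an orthogonal Procrustes fit. This is a simplified illustration on synthetic data, not the full iterative, multi-subject hyperalignment procedure; all array names and dimensions below are assumptions for the example.

```python
import numpy as np

def procrustes_transform(subject_data, reference):
    """Orthogonal matrix R minimizing ||subject_data @ R - reference||_F.

    Rows are time points (stimuli), columns are voxels / model dimensions.
    The optimal orthogonal map is U @ Vt, where U, Vt come from the SVD
    of the cross-covariance between the two spaces.
    """
    u, _, vt = np.linalg.svd(subject_data.T @ reference)
    return u @ vt

rng = np.random.default_rng(0)
common = rng.standard_normal((100, 20))              # 100 time points x 20 model dims
true_rot, _ = np.linalg.qr(rng.standard_normal((20, 20)))
subject = common @ true_rot.T                        # simulated subject voxel responses

R = procrustes_transform(subject, common)
aligned = subject @ R        # project subject data into the common model space
back = aligned @ R.T         # transpose of R maps model-space data back to the subject

print(np.allclose(aligned, common))  # True: common space is recovered
print(np.allclose(back, subject))    # True: round trip via the transpose
```

Because R is orthogonal, its transpose is its inverse, which is why projecting back into an individual's cortical topography needs no separate fitting step.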