0359

Reconstructing multi-dimensional face attributes from fMRI signals
Hui Zhang1, Zixiang Wei2, Xueping Wang2, and Yunhong Wang3

1Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC), Beihang University, Beijing, China, 2School of Computer Science and Engineering, Beihang University, Beijing, China, 3State Key Laboratory of Virtual Reality Technology and Systems School of Computer Science and Engineering, Beihang University, Beijing, China

Synopsis

We develop a new framework that reconstructs perceived faces from functional MRI signals. Inspired by psychological evidence that face processing proceeds along two distinct neuroanatomical visual pathways in the human brain, our framework efficiently extracts multidimensional facial expression and facial identity information from functional MRI signals for precise face image reconstruction.

INTRODUCTION

Reconstructing perceived faces from neural signals has recently become a promising line of work1-4. However, researchers face two challenges: 1) unlike common objects in nature, faces exhibit multiple attributes, such as expression, identity, and gender, and accurately reconstructing these attributes is difficult; 2) the neural representations of face attributes in the brain are complex, and it is challenging to make full use of these representations for precise face image reconstruction. Inspired by psychological evidence5,6 that facial expression and identity are processed in the distinct dorsal and ventral neural pathways of the human temporal lobe, we propose a new reconstruction framework. Our framework efficiently extracts multidimensional facial attributes from functional MRI signals, yielding better face image reconstruction than traditional methods.

METHODS

The new framework

Our framework is designed to establish three relationships between face images and their corresponding neural responses in localized brain regions of interest (ROIs), one for each aspect of the face attributes.

To set up the first relationship, we represent each face image as a single vector and perform principal component analysis (PCA) on the face images to span an eigen-face space. Accordingly, we extract the neural signals evoked by the face image stimuli from all predefined face-selective ROIs, as well as V1, and perform PCA to span an eigen neural response space. This relationship is used for general face image reconstruction.

To set up the second relationship, we re-label the face images according to their facial expression categories and perform PCA to span an ‘eigen expression’ face space. We also extract the neural signals for the face image stimuli from the posterior STS8 and dorsal amygdala7 and perform PCA to span an ‘eigen expression’ neural response space. This relationship is used for facial expression reconstruction.

To set up the third relationship, we re-label the face images according to their facial identity categories and perform PCA to span an ‘eigen identity’ face space. Accordingly, we extract the neural signals from the FFA, OFA8, and aIT9 and perform PCA to span an ‘eigen identity’ neural response space. This relationship is used for facial identity reconstruction.

For each relationship, we use a linear transformation to project eigen scores from the neural response space to the corresponding face image space. We then reconstruct the final face image by finding the best-matched pair of reconstructed facial expression and facial identity images under a least-squares error criterion, and merging these two images with the reconstructed general face image (see Figure 1). Note that other decomposition methods can also be used under this framework, such as multidimensional scaling (MDS), independent component analysis (ICA), and partial least squares regression (PLS).
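The linear projection from neural eigen scores to image eigen scores can be estimated by ordinary least squares. The sketch below assumes a noiseless synthetic linear mapping and hypothetical dimensions; the abstract does not specify how the transformation was fit, so least squares on a training split is one plausible reading.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical eigen scores: 56 stimuli, 20 components per space. W_true is
# a synthetic stand-in for the true neural-to-image relationship.
resp_scores = rng.standard_normal((56, 20))      # neural eigen scores
W_true = rng.standard_normal((20, 20))
face_scores = resp_scores @ W_true               # image eigen scores

# Fit the linear transformation on a training split by least squares.
train, test = slice(0, 40), slice(40, 56)
W_hat, *_ = np.linalg.lstsq(resp_scores[train], face_scores[train],
                            rcond=None)

# Project held-out neural eigen scores into the face image space.
pred_scores = resp_scores[test] @ W_hat
```

Multiplying `pred_scores` back through the eigen-face basis (and adding the mean face) would yield the reconstructed images that are then matched and merged across the three relationships.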

fMRI experiment

Functional MRI data were collected on a GE MR750 3.0 Tesla scanner with a GE 8-channel head coil. During scanning, participants viewed face images (shown in Figure 2) presented in random order in an event-related design. Each participant viewed each face image once per run and completed 10 runs over the whole experiment. Each participant also completed two localizer runs to define that participant’s face-selective regions at the individual level.

RESULTS

Figure 3 shows reconstructions of representative face images from one participant. We evaluated reconstruction accuracy by computing pairwise image similarities with a least-squares error metric, and also with an independent behavioral experiment. We found that: 1. The reconstruction accuracies for facial expression and identity were significant, indicating that the proposed framework can successfully reconstruct multiple face attributes from fMRI signals; 2. The reconstruction accuracies of the proposed framework were significantly higher than those of traditional PCA, demonstrating its advantage over the traditional method; 3. The neural signals in the FFA, OFA, and aIT contributed significantly to reconstruction of the facial identity attribute, while the neural signals in the posterior STS and amygdala contributed significantly to reconstruction of the facial expression attribute, suggesting dissociated neural pathways mediating facial expression and identity.
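The pairwise-similarity evaluation can be read as a two-alternative identification test under the least-squares error metric named above; the sketch below is one plausible implementation on synthetic data, not the authors' exact scoring code.

```python
import numpy as np

def pairwise_accuracy(recons, originals):
    """For each pair (i, j), reconstruction i counts as correct when it is
    closer (least-squares error) to its own original than to original j."""
    n = len(recons)
    correct, total = 0, 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total += 1
            if (np.sum((recons[i] - originals[i]) ** 2)
                    < np.sum((recons[i] - originals[j]) ** 2)):
                correct += 1
    return correct / total

# Sanity check on synthetic data: noiseless reconstructions score 1.0,
# while chance level for this two-alternative test is 0.5.
rng = np.random.default_rng(2)
faces = rng.standard_normal((8, 100))
acc = pairwise_accuracy(faces.copy(), faces)
```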

Conclusion and Discussion

In this study, we developed a new framework that simultaneously extracts multidimensional facial attributes from functional MRI signals for perceived face image reconstruction. Our results showed that facial expression and identity information could be reliably reconstructed from fMRI signals, and that the proposed framework achieved better reconstruction performance than the traditional PCA method. Our further examination of reconstruction accuracies across face-selective regions provides strong evidence for dissociated neural pathways mediating facial expression and identity perception in the human brain.

Acknowledgements

This work was supported by the Chinese Academy of Sciences under Grant No. 81871511.

References

  1. Cowen AS, Chun MM, Kuhl BA. Neural portraits of perception: reconstructing face images from evoked brain activity. Neuroimage. 2014;94:12–22.
  2. Nestor A, Plaut DC, Behrmann M. Feature-based face representations and image reconstruction from behavioral and neural data. Proc Natl Acad Sci U S A. 2016;113(2):416–21.
  3. Lee H, Kuhl BA. Reconstructing perceived and retrieved faces from activity patterns in lateral parietal cortex. J Neurosci. 2016;36(22):6069–82.
  4. Güçlütürk Y, Güçlü U, Seeliger K, et al. Reconstructing perceived faces from brain activations with deep adversarial neural decoding. Advances in Neural Information Processing Systems 30 (NIPS 2017); 2017.
  5. Bruce V, Young A. Understanding face recognition. Br J Psychol. 1990;81:361–80.
  6. Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends Cogn Sci. 2000;4:223–33.
  7. Ahs F, Davis CF, Gorka AX, Hariri AR. Feature-based representations of emotional facial expressions in the human amygdala. Soc Cogn Affect Neurosci. 2014;9:1372–8.
  8. Kanwisher N, McDermott J, Chun MM. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci. 1997;17:4302–11.
  9. Kriegeskorte N, Formisano E, Sorger B, et al. Individual faces elicit distinct response patterns in human anterior temporal cortex. Proc Natl Acad Sci U S A. 2007;104:20600–5.

Figures

Figure 1. Illustration of the proposed framework for reconstructing faces from fMRI signals.

Figure 2. A. The face stimulus set, selected from the KDEF dataset, used for fMRI data collection. These faces belonged to 8 different individuals, half female and half male, each depicting basic facial expressions including neutral, fearful, angry, and happy. B. Schematic of the main task in the slow event-related fMRI design.

Figure 3. Examples of face images and their reconstructions for one participant. From left to right: the first column shows the original images; the second column shows reconstructions with the traditional PCA method; the remaining columns show reconstructions of facial expression from the pSTS and amygdala, reconstructions of facial identity from the FFA, OFA, and aIT, and reconstructions with the proposed framework.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)