Disentangled Face Representations in Deep Generative Models and the Human Brain

How does the human brain recognize faces and represent their many features? Despite decades of research, we still lack a thorough understanding of the computations carried out in face-selective regions of the human brain. Deep networks provide a good match to neural data, but lack interpretability. Here we use a new class of deep generative models, disentangled representation learning models, which learn a latent space where each dimension “disentangles” a different interpretable dimension of faces, such as rotation, lighting, or hairstyle. We show that these disentangled networks are a good encoding model for human fMRI data. We further find that the latent dimensions in these models map onto non-overlapping regions in fMRI data, allowing us to "disentangle" different features such as 3D rotation, skin tone, and facial expression in the human brain. These methods provide an exciting alternative to standard “black box” deep learning methods, and have the potential to change the way we understand representations of visual processing in the human brain.
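To make the encoding-model idea concrete, the sketch below illustrates one common way such an analysis can be set up: latent vectors from a disentangled generative model (e.g. a beta-VAE) are regressed against voxel responses, and each voxel is then assigned to the latent dimension that drives it most strongly. This is a minimal illustration with simulated data and ridge regression, not the authors' pipeline; the model class, regression method, and voxel-assignment rule are all assumptions for the sake of the example.

```python
# Minimal sketch (not the authors' code) of an fMRI encoding model built on
# disentangled latents. Assumes that for each face image you already have a
# latent vector z from a disentangled generative model (e.g. a beta-VAE) and
# the evoked voxel responses. Here both are simulated.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_latents, n_voxels = 500, 10, 200

# Simulated disentangled latents (one column per factor: rotation, lighting, ...).
Z = rng.standard_normal((n_images, n_latents))

# Simulated voxels: each voxel responds mainly to one latent dimension,
# mimicking the "non-overlapping" mapping described in the abstract.
preferred = rng.integers(0, n_latents, size=n_voxels)
W = np.zeros((n_latents, n_voxels))
W[preferred, np.arange(n_voxels)] = 1.0
Y = Z @ W + 0.5 * rng.standard_normal((n_images, n_voxels))

# Fit a ridge-regression encoding model from latents to voxel responses.
Z_tr, Z_te, Y_tr, Y_te = train_test_split(Z, Y, test_size=0.2, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 3, 20)).fit(Z_tr, Y_tr)

# Held-out prediction accuracy per voxel (correlation of predicted vs. true).
pred = model.predict(Z_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out voxel correlation: {np.median(r):.2f}")

# Assign each voxel to the latent with the largest absolute weight -- a simple
# way to ask whether latent factors map onto distinct sets of voxels.
assigned = np.abs(model.coef_).argmax(axis=1)  # coef_ shape: (n_voxels, n_latents)
print(f"fraction of voxels assigned to their simulated factor: {(assigned == preferred).mean():.2f}")
```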
