Modeling conditional distributions of neural and behavioral data with masked variational autoencoders
Extracting the relationship between high-dimensional recordings of neural activity and complex behavior is a ubiquitous problem in systems neuroscience. Toward this goal, encoding and decoding models attempt to infer the conditional distribution of neural activity given behavior and vice versa, while dimensionality reduction techniques aim to extract interpretable low-dimensional representations. Variational autoencoders (VAEs) are flexible deep-learning models commonly used to infer low-dimensional embeddings of neural or behavioral data. However, it is challenging for VAEs to accurately model arbitrary conditional distributions, such as those encountered in neural encoding and decoding, and even more so simultaneously. Here, we present a VAE-based approach for accurately calculating such conditional distributions. We validate our approach on a task with known ground truth and demonstrate its applicability to high-dimensional behavioral time series by retrieving the conditional distributions.
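To make the masking idea concrete: a single network can represent both the encoding conditional (neural given behavior) and the decoding conditional (behavior given neural) if unobserved dimensions are zeroed out and a binary mask is appended to the input, so the network can tell "missing" apart from "observed zero". The sketch below illustrates only this input-masking convention with NumPy; the dimension counts, variable names, and the two example masks are hypothetical, and it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_inputs(x, mask):
    """Zero out unobserved entries of x and append the binary mask,
    so a downstream network can distinguish 'missing' from 'observed zero'."""
    return np.concatenate([x * mask, mask], axis=-1)

# Hypothetical toy example: 3 neural dimensions followed by 2 behavioral ones.
x = rng.normal(size=5)

# Decoding-style conditional: observe neural activity, predict behavior.
neural_mask = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
# Encoding-style conditional: observe behavior, predict neural activity.
behav_mask = np.array([0.0, 0.0, 0.0, 1.0, 1.0])

dec_input = mask_inputs(x, neural_mask)  # behavioral entries zeroed out
enc_input = mask_inputs(x, behav_mask)   # neural entries zeroed out
```

At training time, masks would be sampled (or alternated) so that one set of weights learns all the conditionals of interest; at inference time, the mask selects which conditional distribution the model evaluates.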