Real-world databases are complex and usually involve heterogeneous, mixed data types, making the exploitation of shared information between views a critical issue. For this purpose, recent studies based on deep generative models merge all views into a complex nonlinear latent space that shares information among views. However, this solution limits the model's interpretability, flexibility, and modularity. We propose a novel method that overcomes these limitations by combining multiple Variational AutoEncoders (VAE) with a Factor Analysis latent space (FA-VAE). We use VAEs to learn a private representation of each heterogeneous view in a continuous latent space. We then share information between views through a low-dimensional latent space, connected to the private representations by a linear projection matrix. In this way, we create a flexible and modular hierarchical dependency between private and shared information into which new views can be incorporated afterwards. Beyond that, we can condition pre-trained models, cross-generate data from different domains, and perform transfer learning between generative models.
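As a rough illustration of this hierarchical dependency, the structure can be sketched as a Factor Analysis model placed on top of per-view VAE latents; the notation below ($\mathbf{z}$, $\mathbf{z}^{(m)}$, $\mathbf{W}^{(m)}$, $\sigma^2_{(m)}$, $p_{\theta_m}$) is illustrative and not necessarily the paper's own.
\begin{align*}
  \mathbf{z} &\sim \mathcal{N}\bigl(\mathbf{0}, \mathbf{I}_K\bigr) && \text{shared low-dimensional latent factor} \\
  \mathbf{z}^{(m)} \mid \mathbf{z} &\sim \mathcal{N}\!\bigl(\mathbf{W}^{(m)}\mathbf{z},\; \sigma^2_{(m)}\mathbf{I}\bigr) && \text{private latent of view } m \text{ via a linear projection (Factor Analysis)} \\
  \mathbf{x}^{(m)} \mid \mathbf{z}^{(m)} &\sim p_{\theta_m}\!\bigl(\mathbf{x}^{(m)} \mid \mathbf{z}^{(m)}\bigr) && \text{view-specific VAE decoder for heterogeneous data}
\end{align*}
Under this sketch, adding a new view amounts to attaching one more VAE and projection matrix to the shared factor, without retraining the existing views.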