We developed a novel cross-modal autoencoder framework that integrates different types of medical imaging data to build a more comprehensive representation of patient health. The framework can improve phenotype prediction from a single modality, such as an electrocardiogram (ECG), or impute missing data in one modality from another, such as cardiac MRI. It can also be used to perform genome-wide association studies without requiring any supervision. This article was authored by Adityanarayanan Radhakrishnan, Sam F. Friedman, Shaan Khurshid, and others.
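To make the idea concrete, here is a minimal sketch of the cross-modal pattern described above: each modality gets its own encoder into a shared latent space and its own decoder out of it, so a sample encoded from one modality (e.g. ECG) can be decoded as another (e.g. cardiac MRI). This is an illustrative toy with made-up dimensions and random, untrained weights, not the authors' implementation; all names (`ECG_DIM`, `encode`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes; illustrative only, not from the paper.
ECG_DIM, MRI_DIM, LATENT_DIM = 12, 16, 4

def init_linear(n_in, n_out):
    # A single untrained linear layer: weight matrix plus bias.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

def apply(layer, x):
    W, b = layer
    return x @ W + b

def encode(layer, x):
    # Nonlinear map into the shared latent space.
    return np.tanh(apply(layer, x))

# One encoder/decoder pair per modality, all sharing one latent space.
enc_ecg = init_linear(ECG_DIM, LATENT_DIM)
enc_mri = init_linear(MRI_DIM, LATENT_DIM)
dec_ecg = init_linear(LATENT_DIM, ECG_DIM)
dec_mri = init_linear(LATENT_DIM, MRI_DIM)

# Cross-modal imputation: encode ECG features, decode as MRI features.
ecg = rng.normal(size=(3, ECG_DIM))   # a batch of 3 "patients"
z = encode(enc_ecg, ecg)              # shared latent representation
mri_hat = apply(dec_mri, z)           # imputed MRI-space output

print(z.shape, mri_hat.shape)         # → (3, 4) (3, 16)
```

In training, the latent `z` would be fit so that within-modality reconstruction (ECG→ECG) and cross-modality translation (ECG→MRI) both succeed, which is what makes the shared representation useful for downstream phenotype prediction.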