Machine Learning algorithms have achieved impressive milestones on image generation and prediction tasks, yet these achievements have often not translated into advances for medical applications. This is because medical problems are fundamentally different from those in computer vision: (i) while medical diagnoses are often binary (healthy/diseased), the underlying disease is a “continuous process” of which we only observe a few snapshots at various points in time, and (ii) the image data (e.g., MRI, CT) is under-sampled and corrupted by patient motion in the scanner. In this talk, I will present two generative models that tackle these issues. First, I will present DIVE, a Bayesian spatio-temporal model that estimates the continuous progression of Alzheimer’s disease. Second, I will present BRGM, a Bayesian deep learning method that leverages StyleGAN2 to estimate priors over clean images and then applies Bayes’ rule to estimate the posterior distribution over clean images given a corrupted input image. Taken together, these contributions enable Machine Learning models to correctly model medical diseases using suitable assumptions, and can make medical image acquisition significantly better, faster, and cheaper.
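
As a rough illustration of the Bayesian reconstruction step described above (the exact likelihood, prior, and inference scheme used in BRGM may differ), with a known corruption operator $f$ and Gaussian observation noise, the posterior over a clean image $x$ given a corrupted observation $y$ follows from Bayes’ rule:

$$
p(x \mid y) \;\propto\; p(y \mid x)\, p(x),
\qquad
p(y \mid x) = \mathcal{N}\!\bigl(y \mid f(x),\, \sigma^2 I\bigr),
\qquad
x = G(w),
$$

where $G$ is the StyleGAN2 generator, $p(x)$ is the prior over clean images it induces, and reconstruction amounts to inferring the latent code $w$ (for instance, a maximum a posteriori estimate) given the corrupted input $y$.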