PI: Aleksander Madry, Department of Electrical Engineering & Computer Science, MIT
Generative adversarial networks (GANs) are a general approach to tackling unsupervised learning tasks using deep neural networks. This approach is widely used and very successful in practice, but poorly understood from a theoretical point of view. This proposal aims to remedy that problem by developing theoretical foundations for studying GANs and, more broadly, unsupervised deep learning. Specifically, the PI will build a rigorous and principled framework for unsupervised deep learning using GANs, and analyze its expressive power and complexity. The resulting understanding will then be used to guide the design of new, improved methods for training GANs. These methods, and the heuristics they inspire, will be tested experimentally.
Year 1 Report
- Project Title: Towards Theoretical Foundations of Unsupervised Deep Learning
- Principal Investigator: Aleksander Madry, Department of Electrical Engineering & Computer Science, MIT
- Grant Period: February 2017 – January 2018
The research supported by the project “Towards Theoretical Foundations of Unsupervised Deep Learning” led to two key results:
- The first result gives a precise understanding of the impact of using first-order approximation methods in Generative Adversarial Network (GAN) training. More precisely, it shows that some of the widely used heuristics for training GANs are fundamentally unable to lead to reliable and fully successful training. It also puts forth the first fully rigorous proof of successful convergence of GAN training for a non-trivial class of distributions.
- The second result provides a general, automatic, and scalable framework for evaluating the quality of state-of-the-art GANs. Specifically, its goal is to evaluate to what extent such GANs are able to capture important diversity aspects of the distribution they are trained on.
Beyond this research, the project also led to the PI’s visit to Skoltech. This visit provided him with an opportunity to learn about the research of Skoltech’s faculty and the exciting vision of the school. Below we describe the above-mentioned results in more detail.
Towards Understanding the Dynamics of Generative Adversarial Networks
Generative Adversarial Networks (GANs) have recently been proposed as a novel framework for learning generative models. In a nutshell, the key idea of GANs is to learn both the generative model and the loss function at the same time. The resulting training dynamics are usually described as a game between a generator (the generative model) and a discriminator (the loss function). The goal of the generator is to produce realistic samples that fool the discriminator, while the discriminator is trained to distinguish between the true training data and samples from the generator. GANs have shown promising results on a variety of tasks, and there is now a large body of work that explores the power of this framework.
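To make the generator/discriminator game concrete, the sketch below runs simultaneous gradient updates on a deliberately tiny one-dimensional problem: a Gaussian generator against a logistic discriminator. This is a minimal illustration of the update structure only; the model, hyperparameters, and function names are invented for this sketch and are not taken from the cited work.

```python
import math
import random

def sigmoid(t):
    # Numerically stable logistic function.
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    et = math.exp(t)
    return et / (1.0 + et)

def train_toy_gan(true_mean=2.0, steps=3000, batch=64, lr=0.05, seed=0):
    """Simultaneous gradient GAN dynamics on 1-D data (toy sketch).

    Generator: g(z) = theta + z with z ~ N(0, 1), so fakes ~ N(theta, 1).
    Discriminator: D(x) = sigmoid(w * x + b), a logistic classifier.
    """
    rng = random.Random(seed)
    theta, w, b = -2.0, 0.0, 0.0  # generator and discriminator parameters
    for _ in range(steps):
        real = [rng.gauss(true_mean, 1.0) for _ in range(batch)]
        fake = [theta + rng.gauss(0.0, 1.0) for _ in range(batch)]
        # Discriminator ascends E[log D(real)] + E[log(1 - D(fake))].
        dw = db = 0.0
        for x in real:
            p = sigmoid(w * x + b)
            dw += (1 - p) * x
            db += (1 - p)
        for x in fake:
            p = sigmoid(w * x + b)
            dw -= p * x
            db -= p
        w += lr * dw / batch
        b += lr * db / batch
        # Generator ascends the non-saturating objective E[log D(fake)];
        # its gradient w.r.t. theta is (1 - D(x)) * w at each fake sample x.
        dtheta = sum((1 - sigmoid(w * x + b)) * w for x in fake) / batch
        theta += lr * dtheta
    return theta, w, b

theta, w, b = train_toy_gan()
```

Both players take a single gradient step per round here; as discussed below, such simultaneous first-order dynamics are exactly the regime where failures like oscillation and mode collapse appear.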
Unfortunately, reliably training GANs is a challenging problem that often hinders further research and applicability in this area. Practitioners have encountered a variety of obstacles in this context, such as vanishing gradients, mode collapse, and diverging or oscillatory behavior. At the same time, the theoretical underpinnings of GAN dynamics are not yet well understood. To date, there have been no convergence proofs for GAN models, even in very simple settings. As a result, the root cause of frequent failures of GAN dynamics in practice remains unclear.
Our result takes a first step towards a principled understanding of GAN dynamics. Our general methodology is to propose and examine a problem setup that exhibits all common failure cases of GAN dynamics while remaining sufficiently simple to allow for a rigorous analysis. Concretely, we introduce and study the GMM-GAN: a variant of GAN dynamics that captures learning a mixture of two univariate Gaussians. We first show experimentally that standard gradient dynamics of the GMM-GAN often fail to converge due to mode collapse or oscillatory behavior. Interestingly, this also holds for techniques that were recently proposed to improve GAN training, such as unrolled GANs.
In contrast, we then show that GAN dynamics with an optimal discriminator do converge, both experimentally and provably. To the best of our knowledge, our theoretical analysis of the GMM-GAN is the first global convergence proof for parametric and non-trivial GAN dynamics. This demonstrates a clear dichotomy between the dynamics arising from applying simultaneous gradient descent and those that are able to use an optimal discriminator. The GAN with optimal discriminator provably converges from (essentially) any starting point. On the other hand, the simultaneous gradient GAN empirically often fails to converge, even when the discriminator is allowed many more gradient steps than the generator. These findings go against the common wisdom that first-order methods are sufficiently strong for all deep learning applications.
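The optimal-discriminator dynamics can be illustrated on an even simpler case than the two-component GMM-GAN: a single unit-variance Gaussian generator N(theta, 1) chasing a target N(mu*, 1). This simplification is ours, for illustration only. In this case the best-response discriminator has the closed form D*(x) = p(x) / (p(x) + q_theta(x)), and a short calculation shows that the gradient of the generator loss E_z[log(1 - D*(theta + z))], holding D* fixed at its best response, equals (theta - mu*) * E_z[D*(theta + z)]. Since that expectation lies strictly between 0 and 1, every gradient step contracts theta toward mu*:

```python
import math
import random

def normal_pdf(x, mu):
    # Density of N(mu, 1).
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def optimal_disc_dynamics(mu_star=0.0, theta0=1.5, steps=300, batch=1000, lr=0.5, seed=1):
    """Generator N(theta, 1) learning the target N(mu_star, 1).

    At every step the discriminator is set to its exact best response
    D*(x) = p(x) / (p(x) + q_theta(x)); the generator then descends
    E_z[log(1 - D*(theta + z))], whose gradient reduces to
    (theta - mu_star) * E_z[D*(theta + z)].
    """
    rng = random.Random(seed)
    theta = theta0
    for _ in range(steps):
        # Monte Carlo estimate of E_z[D*(theta + z)].
        mean_d = 0.0
        for _ in range(batch):
            x = theta + rng.gauss(0.0, 1.0)
            p, q = normal_pdf(x, mu_star), normal_pdf(x, theta)
            mean_d += p / (p + q)
        mean_d /= batch
        # Since 0 < mean_d < 1, this step strictly contracts theta toward mu_star.
        theta -= lr * (theta - mu_star) * mean_d
    return theta

theta = optimal_disc_dynamics()
```

Unlike the simultaneous-gradient toy above, this update cannot overshoot or oscillate: the update direction always points from theta toward mu*, mirroring (in a much simpler setting) the global convergence behavior of the optimal-discriminator dynamics.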
Finally, by carefully inspecting our models, we are able to pinpoint some of the causes of these failures, and we highlight a phenomenon we call discriminator collapse, which often causes first-order methods to fail in our setting.
A Classification-Based Perspective on GAN Distributions
One of the key qualities of GANs is that they achieve impressive results in producing realistic samples of natural images. In particular, they have become a promising approach to learning generative models of image distributions. But how well can GANs learn the true underlying data distribution? Answering this question is key to properly understanding the power and limitations of the GAN framework, and the effectiveness of the adversarial setup.
The focus of our result is exactly to address this question. Specifically, we aim to develop a quantitative and universal methodology for studying diversity in GAN distributions. The approach we take uses classification as a lens to examine the diversity of GAN distributions. More precisely, we aim to measure the covariate shift with respect to the true distribution that GANs introduce. The key motivation here is that if GANs were able to fully learn the true distribution, they would exhibit no such shift.
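The classification lens can be illustrated in miniature as follows. Here the "GAN" is simulated by a sampler whose class-1 output has artificially collapsed onto a near-point mass, and a nearest-centroid classifier trained on its samples is compared against one trained on real samples. Everything below (the distributions, the collapse, the classifier) is a hypothetical stand-in for the actual study, which evaluates real GANs on image data.

```python
import random

def nearest_centroid_accuracy(train, test):
    """Train a 1-D nearest-centroid classifier and report test accuracy.

    train/test are lists of (x, label) pairs with labels 0 and 1.
    """
    c0 = [x for x, y in train if y == 0]
    c1 = [x for x, y in train if y == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    correct = sum(1 for x, y in test
                  if (abs(x - m0) > abs(x - m1)) == (y == 1))
    return correct / len(test)

rng = random.Random(0)
n = 2000

def real_samples():
    # "True" distribution: class 0 ~ N(-2, 1), class 1 ~ N(+2, 1).
    return ([(rng.gauss(-2.0, 1.0), 0) for _ in range(n)]
            + [(rng.gauss(2.0, 1.0), 1) for _ in range(n)])

# A "GAN" whose class-1 samples have collapsed onto a near-point mass
# close to the decision boundary (simulated mode collapse).
collapsed = ([(rng.gauss(-2.0, 1.0), 0) for _ in range(n)]
             + [(rng.gauss(0.5, 0.05), 1) for _ in range(n)])

test_set = real_samples()
acc_real = nearest_centroid_accuracy(real_samples(), test_set)
acc_gan = nearest_centroid_accuracy(collapsed, test_set)
```

A classifier trained on the collapsed samples places its decision boundary in the wrong spot and loses accuracy on real held-out data; this accuracy gap is the kind of covariate-shift signal the methodology looks for, without any human inspection of individual samples.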
Using this methodology, we demonstrate two specific forms of covariate shift caused by GANs: (1) mode collapse, which has been observed in prior work [4,5]; and (2) boundary distortion, a phenomenon identified in our work and corresponding to a drop in diversity of the periphery of the learned distribution.
Importantly, our methods need minimal human supervision and can easily be scaled to evaluating state-of-the-art GANs on the same datasets they are typically trained on. To demonstrate this, we chose five popular GANs and studied them on the CelebA and LSUN datasets, arguably the two best-known datasets in the context of GANs.
We find that all the studied GAN setups suffer significantly from the types of diversity loss we mentioned above. As a result, our work pinpoints key shortcomings of current GANs and identifies the challenges that further development of GANs needs to address.
References
- [1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, "Generative Adversarial Nets", Advances in Neural Information Processing Systems (NIPS), 2014.
- [2] Jerry Li, Aleksander Mądry, John Peebles, Ludwig Schmidt (alphabetic order), "Towards Understanding the Dynamics of Generative Adversarial Networks", poster presentation at the ICML 2017 Principled Approaches to Deep Learning workshop.
- [3] Shibani Santurkar, Ludwig Schmidt, Aleksander Mądry, "A Classification-Based Perspective on GAN Distributions", spotlight presentation at the NIPS 2017 Deep Learning: Bridging Theory and Practice workshop.
- [4] Ian Goodfellow, "Generative Adversarial Nets", tutorial at Advances in Neural Information Processing Systems (NIPS), 2016.
- [5] Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein, "Unrolled Generative Adversarial Networks", International Conference on Learning Representations (ICLR), 2017.