Understanding the Power and Limitations of Generative Adversarial Networks (GANs)

PI: Aleksander Madry, Department of Electrical Engineering & Computer Science, MIT

Abstract

This project aims to establish a principled understanding of the power and limitations of generative adversarial networks (GANs), a general and very promising framework for solving distribution learning tasks. Specifically, the PI plans to develop a precise theoretical understanding of the systemic failure of state-of-the-art GANs to capture key aspects of the diversity of the learned distributions, a failure that his recent work (supported by his previous seed fund grant) identified. He then intends to use these insights either to design a new variant of GANs that is free of such issues, or to provide evidence that these issues are an inherent feature of the GAN framework as a whole.

Report

  • Project Title: Understanding the Power and Limitations of Generative Adversarial Networks (GANs)
  • Principal Investigator: Aleksander Madry, Department of Electrical Engineering & Computer Science, Massachusetts Institute of Technology
  • Grant Period: March 2018 – February 2019

The research supported by the project “Understanding the Power and Limitations of Generative Adversarial Networks (GANs)” led to two key results:

1. Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry, "Adversarially Robust Generalization Requires More Data", Advances in Neural Information Processing Systems (NeurIPS), 2018.

2. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Mądry, "Robustness May Be at Odds with Accuracy", International Conference on Learning Representations (ICLR), 2019.

Both results revolve around understanding the phenomenon of adversarially robust generalization, i.e., generalization that is (worst-case) stable with respect to a set of perturbations. This notion of generalization has become fundamental to the study of ML models that offer robust predictions, and it can also be seen as a stepping stone towards studying the saddle point dynamics that underlie the GAN framework.

In the first paper, we studied the sample complexity of adversarially robust generalization. We showed that, already in a simple and natural data model, this complexity can be significantly larger than that of standard "non-robust" generalization. The exhibited gap is information-theoretic and holds irrespective of the training algorithm or the model family used. We also demonstrated such a discrepancy on real-world datasets.

In the second paper, we demonstrated that there is an inherent tension between the goal of adversarially robust generalization and that of standard generalization. Specifically, we showed that there exist fairly simple and natural distributions for which any classifier that robustly solves the corresponding classification task necessarily delivers suboptimal standard accuracy. These findings corroborate a similar phenomenon observed in practice. Further, we argued that classifiers that generalize in an adversarially robust way learn fundamentally different data representations; these representations tend to align better with salient data characteristics and with human perception.
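To make the notion of adversarially robust generalization concrete, the following is a minimal illustrative sketch (not the code from either paper) of how robust accuracy, i.e., accuracy under worst-case l-infinity-bounded perturbations, is commonly estimated via projected gradient descent (PGD). It assumes a PyTorch classifier `model` and a data `loader`; the function names and parameter values (`eps`, `alpha`, `steps`) are illustrative assumptions, not taken from the papers.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    # Search for a worst-case perturbation delta with ||delta||_inf <= eps
    # by projected gradient ascent on the classification loss.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # step to increase the loss
            delta.clamp_(-eps, eps)             # project back onto the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

def robust_accuracy(model, loader, eps=0.03):
    # Fraction of examples still classified correctly under the worst-case
    # perturbation found by the attack above.
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```

The gap between standard accuracy and this robust accuracy is precisely the quantity whose sample-complexity and trade-off behavior the two papers analyze.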
