The paper Bayesian GAN by Y. Saatci and A.G. Wilson introduces a new formulation of GANs that applies Bayesian techniques and outperforms several current state-of-the-art approaches. It is a true generalization in that the original GAN formulation is recovered for a specific choice of parameters, but the authors also show that the formulation allows sampling from a whole family of generators and discriminators, which helps avoid mode collapse. Posteriors over the parameters of the generator and the discriminator are defined mathematically as a marginalization over the noise inputs. An algorithm is presented to sample from these posteriors using stochastic gradient Hamiltonian Monte Carlo (SGHMC), with simple Monte Carlo estimates of the noise marginalization. We will first explain the intuition behind the paper, describe the most important mathematical underpinnings, and apply the algorithm to a new, simple problem. Finally, we extend the model to a new, unreleased dataset and show how it performs in comparison to other state-of-the-art methods.
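To make the central idea concrete, the following is a sketch of the two conditional posteriors the paper defines (notation follows the original up to minor simplification; alpha_g and alpha_d denote the hyperparameters of the priors over the generator weights theta_g and discriminator weights theta_d):

```latex
% Conditional posterior over generator weights \theta_g,
% given noise samples z and discriminator weights \theta_d:
p(\theta_g \mid \mathbf{z}, \theta_d) \propto
    \left( \prod_{i=1}^{n_g} D\big(G(\mathbf{z}^{(i)}; \theta_g); \theta_d\big) \right)
    p(\theta_g \mid \alpha_g)

% Conditional posterior over discriminator weights \theta_d,
% given data X, noise z, and generator weights \theta_g:
p(\theta_d \mid \mathbf{z}, \mathbf{X}, \theta_g) \propto
    \prod_{i=1}^{n_d} D(\mathbf{x}^{(i)}; \theta_d)
    \times \prod_{i=1}^{n_g} \big(1 - D(G(\mathbf{z}^{(i)}; \theta_g); \theta_d)\big)
    \times p(\theta_d \mid \alpha_d)
```

The algorithm alternates SGHMC updates on these two conditionals, drawing fresh noise samples z at each step to approximate the marginalization by simple Monte Carlo.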
Autoencoders & VAEs
Convolutional autoencoders have an extensive and successful record in applications such as representation learning, dimensionality reduction, and learning data transformations. Recently, autoencoding has received even more attention with the rise of generative applications, which also make use of this extremely versatile concept. In this workshop, we will first introduce the concept and inner workings of autoencoders. We will then hold a hands-on session in which attendees implement and train autoencoders on their own.
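As a flavor of the hands-on session, here is a minimal sketch of a convolutional autoencoder in Keras. The dataset (MNIST), architecture, and hyperparameters are illustrative assumptions, not the exact workshop material:

```python
# Minimal convolutional autoencoder sketch in Keras (tf.keras).
# MNIST and the layer sizes below are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Load and normalize MNIST to [0, 1], adding a channel axis.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., np.newaxis] / 255.0
x_test = x_test.astype("float32")[..., np.newaxis] / 255.0

# Encoder: compress 28x28x1 images down to a small spatial code.
encoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),  # -> 14x14
    layers.Conv2D(8, 3, strides=2, padding="same", activation="relu"),   # -> 7x7
], name="encoder")

# Decoder: reconstruct the image from the code.
decoder = models.Sequential([
    layers.Input(shape=(7, 7, 8)),
    layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
], name="decoder")

autoencoder = models.Sequential([encoder, decoder], name="autoencoder")
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train to reconstruct the inputs: the targets are the images themselves.
autoencoder.fit(x_train, x_train, epochs=5, batch_size=128,
                validation_data=(x_test, x_test))
```

After training, encoder.predict gives a compressed representation and the full model reconstructs the input from it, which is the core mechanism the workshop builds on.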
Generative Adversarial Networks
We describe a minimalistic implementation of Generative Adversarial Networks (GANs) in Keras. We train a simple GAN for the task of face synthesis on the CelebA dataset. The goal is to enhance understanding of the concepts and to give an easy-to-understand, hands-on example.
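The following is a minimal sketch of the alternating training loop such an implementation typically uses. The dense networks, flattened-image representation, and hyperparameters are illustrative assumptions; a real CelebA model would usually use convolutional (DCGAN-style) networks on cropped images:

```python
# Minimal GAN training-loop sketch in Keras (tf.keras).
# Shapes and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

latent_dim, img_dim = 100, 64 * 64 * 3  # noise size, flattened 64x64 RGB image

generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(img_dim, activation="tanh"),  # pixels scaled to [-1, 1]
])

discriminator = models.Sequential([
    layers.Input(shape=(img_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # estimated probability of "real"
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model used to train the generator: freeze the discriminator
# and ask it to label generated images as real.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_batch):
    """One alternating update: first the discriminator, then the generator."""
    batch = real_batch.shape[0]
    noise = np.random.normal(size=(batch, latent_dim))
    fake_batch = generator.predict(noise, verbose=0)

    # Discriminator: real images -> 1, generated images -> 0.
    d_loss_real = discriminator.train_on_batch(real_batch, np.ones((batch, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_batch, np.zeros((batch, 1)))

    # Generator (through the frozen stacked model): fool the discriminator.
    g_loss = gan.train_on_batch(noise, np.ones((batch, 1)))
    return d_loss_real, d_loss_fake, g_loss
```

Compiling the discriminator before freezing it in the stacked model is the standard Keras pattern here: the discriminator still updates when trained directly, but stays fixed while the generator's gradients flow through it.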