Vanilla autoencoders (AE), denoising autoencoders (DAE), variational autoencoders (VAE), and conditional variational autoencoders (CVAE) are explained in this post. Referring to the previous post on Bayesian statistics may help your understanding.
As seen in the above structure, autoencoders have the same input and output size. Ultimately, we want the output to be the same as the input, so we penalize the difference between the input $x$ and the output $y$.
We can formulate the simplest autoencoder (with a single fully connected layer on each side) as:

$$h = s(Wx + b), \qquad y = s(W'h + b')$$

where $s$ is a nonlinearity such as the sigmoid.
Since we want $y \approx x$, we get the following optimization problem:

$$\min_{W,\, b,\, W',\, b'} \; L(x, y)$$
Here $L$ is the loss function, which measures the difference between $x$ and $y$. We can use the squared error or the cross-entropy, which are written as:

$$L(x, y) = \lVert x - y \rVert^2, \qquad L(x, y) = -\sum_i \left[ x_i \log y_i + (1 - x_i) \log (1 - y_i) \right]$$
We will use the cross-entropy error, which we will specially denote as $L_H(x, y)$.
We can view this loss function in terms of expectation:

$$\mathbb{E}_{x \sim \hat{p}_{\mathrm{data}}(x)} \left[ L_H(x, y) \right]$$
where $\hat{p}_{\mathrm{data}}$ denotes the empirical distribution associated with our training examples.
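As a concrete (untrained) sketch, the single-layer encoder/decoder and the cross-entropy loss above can be written in a few lines of numpy. All sizes, weight initializations, and names here are illustrative assumptions; a real implementation would also include a training loop:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
d, h = 784, 32                                   # illustrative input / hidden sizes
W, b = rng.normal(scale=0.01, size=(h, d)), np.zeros(h)
W2, b2 = rng.normal(scale=0.01, size=(d, h)), np.zeros(d)

def encode(x):
    return sigmoid(W @ x + b)                    # h = s(Wx + b)

def decode(code):
    return sigmoid(W2 @ code + b2)               # y = s(W'h + b')

def cross_entropy(x, y, eps=1e-12):
    # L_H(x, y): per-pixel binary cross-entropy, summed over pixels
    return -np.sum(x * np.log(y + eps) + (1 - x) * np.log(1 - y + eps))

x = (rng.random(d) >= 0.5).astype(float)         # a fake binarized "image"
y = decode(encode(x))
loss = cross_entropy(x, y)
```

Note that the decoder's sigmoid keeps every $y_i$ in $(0, 1)$, which is what lets us interpret the outputs as per-pixel probabilities in the cross-entropy loss.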
Denoising Autoencoders (DAE)
With the encoder and decoder formulas unchanged, denoising autoencoders intentionally drop a portion of the input pixels to zero, creating a corrupted input $\tilde{x}$. Formally, we are sampling from a stochastic mapping $\tilde{x} \sim q_D(\tilde{x} \mid x)$. We then compute the loss between the original $x$ and the output $y$.
In formulating our objective function, we cannot use that of the vanilla autoencoder, since $y$ is now a deterministic function of $\tilde{x}$, not $x$. Thus we need to take into account the connection between $x$ and $\tilde{x}$, which is $q_D(\tilde{x} \mid x)$. Then we can write our optimization problem and expand it as:

$$\min \; \mathbb{E}_{x \sim \hat{p}_{\mathrm{data}}(x)} \, \mathbb{E}_{\tilde{x} \sim q_D(\tilde{x} \mid x)} \left[ L_H(x, y) \right]$$
where $y$ denotes the reconstruction computed from the corrupted input $\tilde{x}$. Since we cannot compute the inner expectation exactly, we approximate it with the Monte Carlo technique by drawing samples $\tilde{x}$ and computing their mean loss.
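The corruption step $\tilde{x} \sim q_D(\tilde{x} \mid x)$ can be sketched as independent per-pixel masking noise. The drop probability and sizes here are illustrative assumptions, not values from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, drop_prob=0.3):
    # Sample x_tilde ~ q_D(x_tilde | x): each pixel is independently
    # set to zero with probability drop_prob, otherwise kept as-is.
    mask = (rng.random(x.shape) >= drop_prob).astype(float)
    return x * mask

x = rng.random(784)                  # a fake flattened image
x_tilde = corrupt(x)                 # corrupted copy fed to the encoder
```

Because the corruption is resampled at every call, feeding the same image twice yields different $\tilde{x}$'s, which is exactly what the Monte Carlo approximation of the expectation relies on.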
Variational Autoencoders (VAE)
VAEs have the same network structure as AEs: an encoder that calculates a latent variable $z$, and a decoder that generates an output image $y$. Also, we train both networks such that the output image and the input image are the same. However, their goals differ. The goal of an autoencoder is to extract the best feature vector $z$ from an image, whereas the goal of a variational autoencoder is to generate realistic images from the vector $z$.
Also, the network structures of AEs and VAEs are not exactly the same. The encoder of an AE directly calculates the latent variable $z$ from the input. On the other hand, the encoder of a VAE calculates the parameters of a Gaussian distribution ($\mu$ and $\sigma$), from which we then sample our $z$. This is true for the decoder too: an AE outputs the image itself, but a VAE outputs the parameters of the distribution over the image pixels. Let us put this more formally.
Let a standard normal distribution $\mathcal{N}(0, I)$ be the prior distribution $p(z)$ of the latent variable $z$. Given an input image $x$, we have our encoder network calculate the posterior distribution $p(z \mid x)$. Then we sample our latent variable $z$ from the posterior distribution.
Given a latent variable $z$, the likelihood of our decoder outputting $x$ (the input image) is $p(x \mid z)$. We usually interpret this as a multivariate Bernoulli distribution where each pixel of the image corresponds to one dimension.
The Optimization Problem
We want to sample from the posterior $p(z \mid x)$, which can be expanded with Bayes' rule:

$$p(z \mid x) = \frac{p(x \mid z) \, p(z)}{p(x)}$$
However, the evidence $p(x) = \int p(x \mid z) \, p(z) \, dz$ is intractable since we need to integrate over all possible $z$. Thus, without calculating the posterior $p(z \mid x)$ directly, we will approximate it with a Gaussian distribution $q_\phi(z \mid x)$. We call this variational inference.
Since we want the two distributions $p(z \mid x)$ and $q_\phi(z \mid x)$ to be similar, we adopt the Kullback–Leibler divergence and try to minimize it with respect to the parameter $\phi$:

$$\min_\phi \; D_{KL}\left( q_\phi(z \mid x) \,\|\, p(z \mid x) \right)$$
The problem here is that the intractable term $p(x)$ is still present. Now let us write the above equation in terms of $\log p(x)$:

$$\log p(x) = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\left[ \log p(x \mid z) \right] - D_{KL}\left( q_\phi(z \mid x) \,\|\, p(z) \right)}_{\mathrm{ELBO}} + D_{KL}\left( q_\phi(z \mid x) \,\|\, p(z \mid x) \right)$$
KL divergences are always non-negative, and we want to minimize the last term with respect to $\phi$. Since $\log p(x)$ does not depend on $\phi$, this is equivalent to maximizing the ELBO with respect to $\phi$. The abbreviation is revealed: Evidence Lower BOund. This can also be understood as maximizing the evidence, since we want to maximize the probability of getting the exact input image from the output.
Let's inspect the ELBO term. Since no two input images share the same latent variable $z$, we can write, for a single input image $x^{(i)}$:

$$\mathrm{ELBO} = \mathbb{E}_{q_\phi(z \mid x^{(i)})}\left[ \log p(x^{(i)} \mid z) \right] - D_{KL}\left( q_\phi(z \mid x^{(i)}) \,\|\, p(z) \right)$$
Now shifting our attention back to the network structure: our encoder network calculates the parameters of $q_\phi(z \mid x)$, and our decoder network calculates the likelihood $p_\theta(x \mid z)$. Thus we can rewrite the above result so that the parameters match those of the autoencoder described above.
Negating the ELBO, we obtain our loss function for sample $x^{(i)}$:

$$L(x^{(i)}) = -\mathbb{E}_{q_\phi(z \mid x^{(i)})}\left[ \log p_\theta(x^{(i)} \mid z) \right] + D_{KL}\left( q_\phi(z \mid x^{(i)}) \,\|\, p(z) \right)$$
Thus our optimization problem becomes:

$$\min_{\theta,\, \phi} \; \sum_i L(x^{(i)})$$
Understanding the loss function
The first term (excluding the negative sign) is to be maximized. This is called the reconstruction loss: it measures how similar the reconstructed image is to the input image. For each latent variable $z$ we sample from the approximated posterior $q_\phi(z \mid x^{(i)})$, we calculate the log-likelihood of the decoder producing $x^{(i)}$. Thus maximizing this term is equivalent to maximum likelihood estimation.
The second term is the Kullback–Leibler divergence between the approximated posterior $q_\phi(z \mid x)$ and the prior $p(z)$. This acts as a regularizer, forcing the approximated posterior to be similar to the prior distribution, which is a standard normal distribution.
The above plots show the 2-dimensional latent variables of 500 test images for an AE and a VAE. As you can see, the distribution of the latent variables of the VAE is close to the standard normal distribution, which is due to the regularizer. This is a virtue: with this property, we can simply sample a vector $z$ from the standard normal distribution and feed it to the decoder network to generate a reasonable image. This is ideal because VAEs are intended as generators.
Calculating the loss function
To train our VAE, we should be able to calculate the loss. Let’s start with the regularizer term.
We create our encoder network such that it calculates the mean $\mu$ and standard deviation $\sigma$ of $q_\phi(z \mid x)$. We then sample the vector $z$ from this multivariate Gaussian distribution: $z \sim \mathcal{N}(\mu, \sigma^2 I)$.
The KL divergence between two normal distributions has a closed form. Since the prior is $\mathcal{N}(0, I)$, we can calculate the regularizer term as:

$$D_{KL}\left( q_\phi(z \mid x) \,\|\, p(z) \right) = \frac{1}{2} \sum_{j=1}^{J} \left( \mu_j^2 + \sigma_j^2 - \log \sigma_j^2 - 1 \right)$$

where $J$ is the dimension of the latent variable $z$.
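Both the sampling step and the closed-form regularizer are short enough to sketch in numpy. The example values of $\mu$ and $\sigma$ are arbitrary assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_z(mu, sigma):
    # Draw z ~ N(mu, diag(sigma^2)) as z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_regularizer(mu, sigma):
    # Closed-form D_KL( N(mu, diag(sigma^2)) || N(0, I) ),
    # i.e. 0.5 * sum_j (mu_j^2 + sigma_j^2 - log sigma_j^2 - 1)
    return 0.5 * np.sum(mu**2 + sigma**2 - np.log(sigma**2) - 1.0)

mu = np.array([0.5, -0.2])
sigma = np.array([0.8, 1.1])
z = sample_z(mu, sigma)
kl = kl_regularizer(mu, sigma)
```

A quick sanity check on the formula: the regularizer is exactly zero when $\mu = 0$ and $\sigma = 1$, i.e. when the approximated posterior already equals the prior, and positive otherwise.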
Now let's look at the reconstruction loss term. To calculate the log-likelihood $\log p_\theta(x \mid z)$ of our image, we should choose how to model our output. We have two choices.
Multivariate Bernoulli Distribution
This is often reasonable for black and white images like those from MNIST. We binarize the training and testing images with threshold 0.5. We can implement this easily with pytorch:
image = (image >= 0.5).float()
Each output of the decoder corresponds to a single pixel of the image, denoting the probability $y_i$ of that pixel being white. Then we can use the Bernoulli probability mass function as our likelihood:

$$p_\theta(x \mid z) = \prod_i y_i^{x_i} (1 - y_i)^{1 - x_i}$$

Taking the log, maximizing this is equivalent to minimizing the cross-entropy loss:

$$\log p_\theta(x \mid z) = \sum_i \left[ x_i \log y_i + (1 - x_i) \log (1 - y_i) \right]$$
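A minimal numpy sketch of this log-likelihood, with made-up pixel values and decoder probabilities for illustration:

```python
import numpy as np

def bernoulli_log_likelihood(x, y, eps=1e-12):
    # log p(x|z) = sum_i [ x_i log y_i + (1 - x_i) log(1 - y_i) ],
    # where y holds the decoder's per-pixel "white" probabilities
    return np.sum(x * np.log(y + eps) + (1 - x) * np.log(1 - y + eps))

x = np.array([1.0, 0.0, 1.0])        # binarized pixels
y = np.array([0.9, 0.2, 0.8])        # decoder output probabilities
ll = bernoulli_log_likelihood(x, y)  # negating this gives the BCE loss
```

The log-likelihood is always non-positive (each factor of the product is a probability), and negating it recovers exactly the binary cross-entropy from the vanilla autoencoder section.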
Multivariate Gaussian Distribution
The probability density function of a Gaussian distribution is as follows:

$$\mathcal{N}(x;\, \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)$$
Using this as our likelihood, with the decoder outputting the per-pixel means $\mu_i$:

$$\log p_\theta(x \mid z) = -\sum_i \left[ \frac{(x_i - \mu_i)^2}{2\sigma_i^2} + \frac{1}{2} \log \left( 2\pi\sigma_i^2 \right) \right]$$
Notice that if we fix $\sigma_i = 1$, maximizing this log-likelihood is equivalent to minimizing the squared error.
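We can sketch the fixed-variance Gaussian log-likelihood in numpy; the pixel values and decoder means below are arbitrary assumptions for illustration:

```python
import numpy as np

def gaussian_log_likelihood(x, mu, sigma=1.0):
    # log p(x|z) with the decoder outputting per-pixel means mu (fixed sigma):
    # -sum_i [ (x_i - mu_i)^2 / (2 sigma^2) + 0.5 log(2 pi sigma^2) ]
    return np.sum(-((x - mu) ** 2) / (2 * sigma**2)
                  - 0.5 * np.log(2 * np.pi * sigma**2))

x = np.array([0.5, 0.1, 0.7])
mu_good = x.copy()                    # perfect reconstruction
mu_bad = np.array([0.1, 0.9, 0.2])   # poor reconstruction
```

With `sigma=1.0`, the data-dependent part is just $-\tfrac{1}{2}\lVert x - \mu \rVert^2$, so a better reconstruction always gets a higher log-likelihood.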
Now that we've calculated the likelihood $p_\theta(x \mid z)$, we can look at the whole reconstruction loss term. Unfortunately, the expectation is difficult to compute since it takes into account every possible $z$. So we use the Monte Carlo approximation of the expectation: we sample $L$ $z$'s from $q_\phi(z \mid x)$ and take their mean log-likelihood.
For convenience, we use a single sample ($L = 1$) in implementation.
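The Monte Carlo estimate of the reconstruction term can be sketched as below. The `fake_ll` function is a hypothetical stand-in for the decoder's log-likelihood, used here only so the sketch is runnable:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_reconstruction_term(mu, sigma, log_likelihood, n_samples=1):
    # Monte Carlo estimate of E_{q(z|x)}[log p(x|z)] using n_samples draws
    total = 0.0
    for _ in range(n_samples):
        z = mu + sigma * rng.standard_normal(mu.shape)  # z ~ N(mu, sigma^2)
        total += log_likelihood(z)
    return total / n_samples

# a stand-in log-likelihood for demonstration only
fake_ll = lambda z: -np.sum(z**2)
estimate = mc_reconstruction_term(np.zeros(2), np.ones(2), fake_ll)
```

Setting `n_samples=1` corresponds to the single-sample convention mentioned above; a larger value trades compute for a lower-variance estimate.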
Conditional Variational Autoencoders (CVAE)
The CVAE has the same structure and loss function as the VAE, but the input data is different. Notice that in VAEs, we never used the labels of our training data. If we have labels, why don’t we use them?
Now, in conditional variational autoencoders, we concatenate the one-hot labels with the input images, and also with the latent variables. Everything else is the same.
What do we get by doing this? One benefit is that the latent variable $z$ no longer needs to encode which class the input belongs to. It only needs to encode the style, i.e., the class-invariant features, of that image.
Then, we can concatenate any one-hot vector to generate an image of the intended class with the specific style encoded by the latent variable $z$.
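The concatenation step is the only structural change, and it can be sketched directly. The image size, latent dimension, and class count below are illustrative assumptions:

```python
import numpy as np

def one_hot(label, num_classes=10):
    # Build a one-hot label vector, e.g. 3 -> [0,0,0,1,0,0,0,0,0,0]
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

rng = np.random.default_rng(0)
x = rng.random(784)                   # flattened input image
z = rng.standard_normal(2)            # 2-D latent vector
label = 3

encoder_input = np.concatenate([x, one_hot(label)])  # image + label -> encoder
decoder_input = np.concatenate([z, one_hot(label)])  # latent + label -> decoder
```

At generation time, we keep $z$ fixed and swap in different one-hot vectors to produce the same style in every class.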
For more images on generation, check out my repository’s README file.