Image generation is usually done with a GAN (Generative Adversarial Network).
A picture x can be viewed as a point in a high-dimensional space, and real images follow some distribution P_data(x) over that space. In the figure below, the blue area marks the region where the images in our database appear with high probability.
So what is the main task of image generation?
It is to model the distribution of the original data: given a set of parameters θ, we define a probabilistic model P_G(x;θ), and a generator can be obtained by learning this model. What we hope for is to find parameters θ such that the samples x in the original data set appear with probability as large as possible under the distribution P_G defined by the generator.
To find such parameters, we can randomly draw m samples x^1, x^2, ..., x^m from the database, compute the probability P_G(x^i;θ) of generating each sample, and then multiply these m probabilities together to obtain the corresponding likelihood function L = ∏ P_G(x^i;θ).
Maximizing this likelihood gives the maximum-likelihood estimate θ* = arg max_θ ∏ P_G(x^i;θ). Taking the logarithm and approximating the sample average by an expectation over P_data, this turns out to be exactly equivalent to minimizing the KL divergence between the distribution of generated images and that of the real images: θ* ≈ arg min_θ KL(P_data || P_G).
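The equivalence above can be checked numerically on a toy problem. In the hypothetical sketch below (not from the text), P_data is a unit-variance Gaussian with mean 3 and the "generator" family is the set of unit-variance Gaussians N(θ, 1); for this family KL(P_data || P_G) = (3 − θ)² / 2, so the KL-minimizing θ is exactly 3, and the maximum-likelihood θ found from samples lands there too:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the text): P_data = N(3, 1),
# generator family P_G(x; theta) = N(theta, 1).
samples = rng.normal(loc=3.0, scale=1.0, size=10_000)

def avg_log_likelihood(theta, x):
    # average of log N(x; theta, 1) over the m samples
    return np.mean(-0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi))

thetas = np.linspace(0.0, 6.0, 601)
lls = [avg_log_likelihood(t, samples) for t in thetas]
theta_mle = thetas[np.argmax(lls)]

# KL(P_data || P_G) = (3 - theta)^2 / 2 for unit-variance Gaussians,
# so the KL minimizer is theta = 3; theta_mle should be very close to it.
print(theta_mle)
```

With 10,000 samples the sample mean, and hence the MLE on this grid, sits within a few hundredths of 3.0.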
Our generator is, in essence, a network G. The network defines a probability distribution P_G, and what we hope is that P_G is as close as possible to P_data. The question is: how do we measure the divergence between them?
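The sense in which a network "defines a distribution" can be sketched as follows. In this hypothetical toy (an affine map standing in for a deep network), feeding random noise z through G implicitly induces a distribution P_G over the outputs x, even though we never write down its density:

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z, theta):
    # Toy generator: an affine map w*z + b. A real GAN generator is a deep
    # network, but the principle is identical: pushing prior noise z through
    # G implicitly defines a distribution P_G over outputs x.
    w, b = theta
    return w * z + b

z = rng.normal(size=5_000)                     # prior noise, z ~ N(0, 1)
x_generated = generator(z, theta=(2.0, 1.0))   # samples from P_G

# We can sample from P_G freely, but once G is a complicated network we
# have no closed-form expression for its density -- which is exactly why
# comparing P_G to P_data directly is hard.
print(x_generated.mean(), x_generated.std())
```

For this affine toy the induced P_G is N(1, 2²), so the printed mean and standard deviation come out near 1 and 2.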
Let us set this question aside for now and turn to the discriminator.
Although we do not know the specific forms of P_data and P_G, we can sample real pictures and generated pictures from them.
For a given generator G, we want to optimize the objective function shown in the figure below, V(G, D) = E_{x~P_data}[log D(x)] + E_{x~P_G}[log(1 − D(x))], whose form is very similar to the cross-entropy loss of a binary classification problem.
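The connection to binary cross-entropy can be made concrete. In this hypothetical sketch, real and generated samples are drawn from two separated Gaussians (illustrative choices, not from the text), the discriminator is a simple logistic unit, and V(G, D) is estimated from samples; up to sign, it is the negative binary cross-entropy with label 1 for real samples and label 0 for generated ones:

```python
import numpy as np

rng = np.random.default_rng(2)

def discriminator(x):
    # Toy discriminator: a logistic unit. D(x) in (0, 1) is the estimated
    # probability that x came from the real data distribution.
    return 1.0 / (1.0 + np.exp(-x))

x_real = rng.normal(loc=2.0, scale=1.0, size=10_000)   # samples ~ P_data
x_fake = rng.normal(loc=-2.0, scale=1.0, size=10_000)  # samples ~ P_G

# Monte-Carlo estimate of
#   V(G, D) = E_{x~P_data}[log D(x)] + E_{x~P_G}[log(1 - D(x))],
# i.e. (minus) the binary cross-entropy of D as a real-vs-fake classifier.
v = (np.mean(np.log(discriminator(x_real)))
     + np.mean(np.log(1.0 - discriminator(x_fake))))
print(v)
```

Both log terms are negative, so V is always at most 0; the better D separates the two sample sets, the closer V gets to 0.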
We find that when the divergence between the real pictures and the generated pictures is small, the discriminator has difficulty telling them apart. However, when the divergence is large, the discriminator can identify them relatively easily.
Next, we solve the inner maximization max_D V(G, D):
We find that max_D V(G, D) corresponds exactly to the JS divergence: with the optimal discriminator D*(x) = P_data(x) / (P_data(x) + P_G(x)), we get max_D V(G, D) = −2 log 2 + 2 · JSD(P_data || P_G).
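This identity can be verified numerically. The sketch below (a hypothetical discretized 1-D example; the two Gaussians are illustrative choices) plugs the optimal discriminator D* into V(G, D) and compares the result against −2 log 2 + 2 · JSD computed directly from the definition of the JS divergence:

```python
import numpy as np

# Discretized 1-D check of  max_D V(G, D) = -2 log 2 + 2 * JSD(P_data || P_G),
# using the optimal discriminator D*(x) = P_data(x) / (P_data(x) + P_G(x)).
xs = np.linspace(-10, 10, 20_001)
dx = xs[1] - xs[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p_data = gauss(xs, 1.0, 1.0)    # illustrative P_data
p_g = gauss(xs, -1.0, 1.5)      # illustrative P_G

d_star = p_data / (p_data + p_g)  # optimal discriminator D*

# V(G, D*) = E_pdata[log D*] + E_pg[log(1 - D*)], as a Riemann sum
v_max = (np.sum(p_data * np.log(d_star)) * dx
         + np.sum(p_g * np.log(1.0 - d_star)) * dx)

# JSD(P || Q) = 0.5 KL(P || M) + 0.5 KL(Q || M),  M = (P + Q) / 2
m = 0.5 * (p_data + p_g)
kl = lambda p, q: np.sum(p * np.log(p / q)) * dx
jsd = 0.5 * kl(p_data, m) + 0.5 * kl(p_g, m)

print(v_max, -2 * np.log(2) + 2 * jsd)  # the two values agree
```

The agreement is not a coincidence: pointwise, log(P_data / (P_data + P_G)) = log(P_data / M) − log 2, and the same for the P_G term, which is exactly how the −2 log 2 and the two KL terms of the JSD arise.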