$R_1$ Regularization is a gradient penalty for training generative adversarial networks. It discourages the discriminator from deviating from the Nash equilibrium by penalizing its gradient on real data alone: when the generator distribution matches the true data distribution and the discriminator is equal to 0 on the data manifold, the gradient penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.
This leads to the following regularization term:

$$R_1(\psi) = \frac{\gamma}{2} \, \mathbb{E}_{p_{\mathcal{D}}(x)}\!\left[\lVert \nabla D_\psi(x) \rVert^2\right]$$
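As a rough illustration of the term above, the sketch below estimates the penalty $\frac{\gamma}{2}\,\mathbb{E}[\lVert \nabla_x D(x) \rVert^2]$ for an arbitrary discriminator function using central finite differences; the function name `r1_penalty` and the finite-difference approach are illustrative assumptions (practical implementations compute the exact gradient via automatic differentiation, e.g. `torch.autograd.grad` with `create_graph=True`).

```python
import numpy as np

def r1_penalty(disc, x_real, gamma=10.0, eps=1e-5):
    """Estimate the R1 penalty (gamma/2) * E_x[ ||grad_x D(x)||^2 ]
    over a batch of real samples, using central finite differences.

    disc   : callable mapping an (n, d) batch to (n,) discriminator logits
    x_real : (n, d) batch of real data points
    """
    n, d = x_real.shape
    grads = np.zeros_like(x_real)
    for j in range(d):
        step = np.zeros(d)
        step[j] = eps
        # Central difference along coordinate j for every sample in the batch.
        grads[:, j] = (disc(x_real + step) - disc(x_real - step)) / (2.0 * eps)
    # Mean squared gradient norm over the batch, scaled by gamma/2.
    return 0.5 * gamma * np.mean(np.sum(grads**2, axis=1))

# Sanity check on a linear "discriminator" D(x) = w . x, whose gradient is w
# everywhere, so the penalty is exactly (gamma/2) * ||w||^2.
w = np.array([3.0, 4.0])
penalty = r1_penalty(lambda x: x @ w, np.zeros((5, 2)), gamma=2.0)
# ||w||^2 = 25, gamma/2 = 1, so the estimate is close to 25.
```

Note that, per the definition above, the expectation is taken over real data only, which is what distinguishes $R_1$ from two-sided penalties such as the WGAN-GP gradient penalty on interpolates.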
| Task | Papers | Share |
|---|---|---|
| Image Generation | 120 | 16.06% |
| Disentanglement | 47 | 6.29% |
| Image Manipulation | 33 | 4.42% |
| Face Generation | 30 | 4.02% |
| Face Recognition | 25 | 3.35% |
| Diversity | 23 | 3.08% |
| Decoder | 18 | 2.41% |
| Image-to-Image Translation | 18 | 2.41% |
| Face Swapping | 17 | 2.28% |