Commit dd7f193
Parent(s): 6bf16bd

Update README.md

README.md CHANGED

@@ -16,4 +16,11 @@ Wasserstein GANs with Gradient Penalty: [Paper](https://arxiv.org/abs/1704

The original Wasserstein GAN leverages the Wasserstein distance to produce a value function that has better theoretical properties than the value function used in the original GAN paper. WGAN requires that the discriminator (a.k.a. the critic) lie within the space of 1-Lipschitz functions. The authors proposed weight clipping to achieve this constraint. Although weight clipping works, it is a problematic way to enforce the 1-Lipschitz constraint and can cause undesirable behavior; for example, a very deep WGAN discriminator (critic) often fails to converge.
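
Weight clipping itself is a one-line operation. A minimal sketch, assuming a PyTorch critic (`critic` and `clip_value` are hypothetical names for illustration, not code from this repository):

```python
import torch

def clip_critic_weights(critic: torch.nn.Module, clip_value: float = 0.01) -> None:
    # Clamp every critic parameter into [-clip_value, clip_value] after each
    # critic update. This crudely bounds the Lipschitz constant, but it biases
    # the critic toward overly simple functions, which is the failure mode
    # that motivates the gradient penalty below.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip_value, clip_value)
```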
The WGAN-GP method proposes an alternative to weight clipping to ensure smooth training. Instead of clipping the weights, the authors propose a "gradient penalty": an extra loss term that keeps the L2 norm of the critic's gradient with respect to its input close to 1.
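
A minimal sketch of this penalty, assuming a PyTorch implementation (the names `critic`, `real`, and `fake` are placeholders, and `lambda_gp = 10` follows the paper's default; this is an illustration, not this repository's code):

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Sample points uniformly along straight lines between the real and fake
    # batches (shapes here assume image tensors of shape [B, C, H, W]).
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)

    scores = critic(interpolates)

    # Gradient of the critic's output with respect to its input.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself is differentiable
    )[0]

    # Penalize deviation of the per-sample gradient L2 norm from 1.
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

This term is added to the critic's loss on every critic update, in place of any weight clipping.
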
<details>
<summary>View Model Summary</summary>

(Model summary image)

</details>