Lastly, we perform experiments with different numbers of generators. The MGAN models with 2, 3, 4 and 10 generators all successfully explore the 8 modes, but the models with more generators generate fewer points scattered between adjacent modes. We also examine the behavior of the diversity coefficient β by training the 4-generator model with different values of β. Without the JSD force (β = 0), generated samples cluster around one mode. When β = 0.25, the JSD force is weak and generated data cluster near 4 different modes. When β = 0.75 or 1.0, the JSD force is too strong and causes the generators to collapse, generating 4 increasingly tight clusters. When β = 0.5, the generators successfully cover all 8 modes. Please refer to Appendix C.1 for experimental details.
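The full setup for this synthetic experiment is given in Appendix C.1. As a minimal sketch of such an 8-mode 2D dataset (assuming, as is common for this kind of toy benchmark, 8 Gaussians evenly spaced on a circle; the exact placement and variance used in the paper may differ):

```python
import numpy as np

def sample_8_modes(n, radius=2.0, std=0.02, seed=0):
    """Toy 2D data with 8 modes: a mixture of 8 Gaussians evenly spaced on a
    circle. The radius/std values are illustrative assumptions, not the
    settings from Appendix C.1."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * np.arange(8) / 8
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (8, 2)
    ks = rng.integers(0, 8, size=n)                  # pick one mode per sample
    return centers[ks] + std * rng.standard_normal((n, 2))

data = sample_8_modes(1024)   # e.g., synthetic training data for the 2D experiments
```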
# 5.2 REAL-WORLD DATASETS
Next we train our proposed method on real-world datasets of natural scenes to investigate its performance and scalability on much more challenging large-scale image data.
Datasets. We use 3 widely-adopted datasets: CIFAR-10 (Krizhevsky & Hinton, 2009), STL-10 (Coates et al., 2011) and ImageNet (Russakovsky et al., 2015). CIFAR-10 contains 50,000 32×32 training images of 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. STL-10, subsampled from ImageNet, is a more diverse dataset than CIFAR-10, containing about 100,000 96×96 images. ImageNet (2012 release) is the largest and most diverse of the three, consisting of over 1.2 million images from 1,000 classes. To facilitate fair comparison with the baselines in (Warde-Farley & Bengio, 2016; Nguyen et al., 2017), we follow the procedure of (Krizhevsky et al., 2012) to resize the STL-10 and ImageNet images down to 48×48 and 32×32, respectively.
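As a hedged illustration of the resizing step (a simple Pillow-based stand-in; the exact procedure of Krizhevsky et al. (2012) is not reproduced here):

```python
from PIL import Image

def downsample(path, size):
    """Resize an image to size x size pixels (e.g., 48 for STL-10, 32 for ImageNet).
    A simple stand-in for the preprocessing described above, not the exact
    procedure of Krizhevsky et al. (2012)."""
    return Image.open(path).convert("RGB").resize((size, size))

# img = downsample("stl10_example.png", 48)   # hypothetical file path
```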
Evaluation protocols. For quantitative evaluation, we adopt the Inception score proposed in (Salimans et al., 2016), which computes exp(E_x[KL(p(y|x) || p(y))]), where p(y|x) is the conditional label distribution for the image x estimated by the reference Inception model (Szegedy et al., 2015). This metric rewards good and varied samples and is found to be well-correlated with human judgment (Salimans et al., 2016). We use the code provided in (Salimans et al., 2016) to compute the Inception scores for 10 partitions of 50,000 randomly generated samples. For a qualitative demonstration of the image quality obtained by our proposed model, we show samples generated by the mixture as well as samples produced by each generator. Samples are randomly drawn rather than cherry-picked.
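A minimal NumPy sketch of this metric, assuming the softmax outputs p(y|x) of the reference Inception network have already been computed for the generated samples (that step is not shown):

```python
import numpy as np

def inception_score(probs: np.ndarray, n_splits: int = 10):
    """probs: (N, C) softmax outputs p(y|x) from the reference Inception model
    for N generated images. Returns the mean and std of the score over splits."""
    scores = []
    for chunk in np.array_split(probs, n_splits):
        p_y = chunk.mean(axis=0, keepdims=True)              # marginal p(y) on this split
        kl = (chunk * (np.log(chunk + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
        scores.append(np.exp(kl.mean()))                     # exp(E_x[KL(p(y|x) || p(y))])
    return float(np.mean(scores)), float(np.std(scores))
```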
Model architectures. Our generator and discriminator architectures closely follow the DCGAN design (Radford et al., 2015). The only difference is that we apply batch normalization (Ioffe & Szegedy, 2015) to all layers in the networks except for the output layer. Regarding the classifier, we empirically find that our proposed MGAN achieves the best performance (i.e., fast convergence rate and high Inception score) when the classifier shares the parameters of all layers with the discriminator except for the output layer. The reason is that this parameter-sharing scheme allows the classifier and discriminator to leverage the common features and representations learned at every layer, which helps to improve and speed up training. When the parameters are not tied, the model learns slowly and eventually yields lower performance.
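A minimal PyTorch sketch of this parameter-sharing scheme, in which the discriminator and classifier share a common trunk and differ only in their output heads (layer sizes here are illustrative placeholders, not the DCGAN-based configuration actually used):

```python
import torch
import torch.nn as nn

class SharedDC(nn.Module):
    """Discriminator and classifier sharing all layers except their output heads.
    A sketch only: the layer sizes are illustrative, not the paper's configuration."""
    def __init__(self, num_generators: int = 10):
        super().__init__()
        self.trunk = nn.Sequential(                          # shared feature extractor
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        feat_dim = 128 * 8 * 8                               # for 32x32 inputs
        self.disc_head = nn.Linear(feat_dim, 1)              # real vs. generated (logit)
        self.clf_head = nn.Linear(feat_dim, num_generators)  # which generator produced x

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        return self.disc_head(h), self.clf_head(h)
```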
During training we observe that the percentage of active neurons chronically declines (see Appendix C.2). One possible cause is that the batch normalization center (offset) is gradually shifted to the negative range, deactivating up to 45% of the ReLU units in the generator networks. Our ad-hoc solution to this problem is to fix the offset at zero for all layers in the generator networks. The rationale is that, for each feature map, the ReLU gates will then open for roughly the 50% highest inputs in a minibatch across all locations and generators, and close for the rest.
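A sketch of this ad-hoc fix (shown in PyTorch for illustration; the paper's implementation uses TensorFlow): the batch-norm offset is frozen at zero while the scale remains learnable.

```python
import torch.nn as nn

def bn_no_offset(num_features: int) -> nn.BatchNorm2d:
    """Batch normalization whose offset (beta) is fixed at zero while the scale
    (gamma) stays learnable. A sketch of the fix described above; the authors'
    own implementation may differ in detail."""
    bn = nn.BatchNorm2d(num_features)
    nn.init.zeros_(bn.bias)            # offset starts at zero ...
    bn.bias.requires_grad_(False)      # ... and is never updated during training
    return bn
```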
We also experiment with other activation functions for the generator networks. First we use Leaky ReLU and obtain results similar to those with ReLU. Then we use MaxOut units (Goodfellow et al., 2013), which achieve good Inception scores but generate unrecognizable samples. Finally, we try SELU (Klambauer et al., 2017) but fail to train our model.
Hyperparameters. Three key hyperparameters of our model are the number of generators K, the coefficient β controlling diversity, and the minibatch size. We use a minibatch size of ⌊128/K⌋ for each generator, so that the total number of samples for training all generators is about 128. We train models with 4 generators and 10 generators, corresponding to minibatch sizes of 32 and 12 each, and find that the models with 10 generators perform better. For ImageNet, we try an additional setting with 32 generators and a minibatch size of 4 for each. A batch of 4 samples is too small for estimating the sufficient statistics of a batch-norm layer, so we drop batch norm in the input layer of each generator. This 32-generator model, however, does not obtain considerably better results than the 10-generator one. Therefore, in what follows we only report the results of models with 10 generators. For the diversity coefficient β, we observe no significant difference in Inception scores when varying the value of β, but the quality of generated images declines when β is too low or too high. Generated samples by each generator vary more when β is low.
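The per-generator minibatch sizes quoted above follow directly from ⌊128/K⌋:

```python
total = 128                      # target number of samples per training step
for K in (4, 10, 32):            # numbers of generators tried
    per_gen = total // K         # floor(128 / K) samples drawn from each generator
    print(K, per_gen)            # -> (4, 32), (10, 12), (32, 4)
```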
Inception results. We now report the Inception scores obtained by our MGAN and the baselines in Tab. 1. It is worth noting that, for fair comparison, only models trained in a completely unsupervised manner without label information are included, and DCGAN's and D2GAN's results on STL-10 are available only for models trained at 32×32 resolution. Overall, our proposed model outperforms the baselines by large margins and achieves state-of-the-art performance on all datasets. Moreover, we would highlight that our MGAN obtains a score of 8.33 on CIFAR-10, which is even better than those of models trained with labels, such as 8.09 for Improved GAN (Salimans et al., 2016) and 8.25 for AC-GAN (Odena et al., 2016). In addition, we train our model on the original 96×96 resolution of STL-10 and achieve a score of 9.79±0.08. This suggests that the MGAN can be successfully trained on higher-resolution images and achieve higher Inception scores.
Table 1: Inception scores on different datasets. "–" denotes an unavailable result.
| Model | CIFAR-10 | STL-10 | ImageNet |
|---|---|---|---|
| Real data | 11.24±0.16 | 26.08±0.26 | 25.78±0.47 |
| WGAN (Arjovsky et al., 2017) | 3.82±0.06 | – | – |
| MIX+WGAN (Arora et al., 2017) | 4.04±0.07 | – | – |
| Improved-GAN (Salimans et al., 2016) | 4.36±0.04 | – | – |
| ALI (Dumoulin et al., 2016) | 5.34±0.05 | – | – |
| BEGAN (Berthelot et al., 2017) | 5.62 | – | – |
| MAGAN (Wang et al., 2017) | 5.67 | – | – |
| GMAN (Durugkar et al., 2016) | 6.00±0.19 | – | – |
| DCGAN (Radford et al., 2015) | 6.40±0.05 | 7.54 | 7.89 |
| DFM (Warde-Farley & Bengio, 2016) | 7.72±0.13 | 8.51±0.13 | 9.18±0.13 |
| D2GAN (Nguyen et al., 2017) | 7.15±0.07 | 7.98 | 8.25 |
| MGAN | 8.33±0.10 | 9.22±0.11 | 9.32±0.10 |
Image generation. Next we present samples randomly generated by our proposed model trained on the 3 datasets for qualitative assessment. Fig. 3a shows CIFAR-10 32×32 images containing a wide range of objects such as airplanes, cars, trucks, ships, birds, horses and dogs. Similarly, the STL-10 48×48 generated images in Fig. 3b include cars, ships, airplanes and many types of animals, but with a wider range of themes such as sky, underwater, mountain and forest. Images generated for ImageNet 32×32 are diverse, with some recognizable objects such as a lady, an old man, birds, a human eye, a living room, a hat and slippers, to name a few. Fig. 4a shows several cherry-picked STL-10 96×96 images, which demonstrate that the MGAN is capable of generating visually appealing images with complicated details. However, many samples are still incomplete and unrealistic, as shown in Fig. 4b, leaving plenty of room for improvement.
(a) CIFAR-10 32×32. (b) STL-10 48×48. (c) ImageNet 32×32.
Figure 3: Images generated by our proposed MGAN trained on natural image datasets. Due to the space limit, please refer to the appendix for larger plots.
Finally, we investigate the samples generated by each generator as well as the evolution of these samples over the course of training. Fig. 5 shows images generated by each of the 10 generators in our MGAN trained on CIFAR-10 at epochs 20, 50, and 250 of training. Samples in each row correspond to a different generator.
(a) Cherry-picked samples. (b) Incomplete, unrealistic samples.
Figure 4: Images generated by our MGAN trained on the original 96×96 STL-10 dataset.
Generators start to specialize in generating different types of objects as early as epoch 20 and become more and more consistent: generators 2 and 3 in flying objects (birds and airplanes), generator 4 in full pictures of cats and dogs, generator 5 in portraits of cats and dogs, generator 8 in ships, generator 9 in cars and trucks, and generator 10 in horses. Generator 6 seems to generate images of frogs or animals in a bush. Generator 7, however, collapses at epoch 250. One possible explanation for this behavior is that images of different object classes tend to have different themes. Lastly, Wang et al. (2016) noticed that one of the causes of non-convergence in GANs is that the generators and discriminators constantly vary; the generators at two consecutive epochs of training generate significantly different images. This experiment demonstrates the effect of the JSD force in preventing the generators from moving around the data space.
(a) Epoch #20. (b) Epoch #50. (c) Epoch #250.
Figure 5: Images generated by our MGAN trained on CIFAR-10 at different epochs. Samples in each row, from top to bottom, correspond to a different generator.
# 6 CONCLUSION
We have presented a novel adversarial model to address mode collapse in GANs. Our idea is to approximate the data distribution using a mixture of multiple distributions, wherein each distribution captures a subset of the data modes separately from the others. To achieve this goal, we propose a minimax game among one discriminator, one classifier and many generators, formulating an optimization problem that minimizes the JSD between Pdata and Pmodel, i.e., the mixture of distributions induced by the generators, whilst maximizing the JSD among those generator distributions. This helps our model generate diverse images that better cover the data modes, thus effectively avoiding mode collapse. We term our proposed model Mixture Generative Adversarial Network (MGAN).
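In symbols, writing $P_{G_k}$ for the distribution induced by generator $k$ and assuming, for illustration, a uniform mixture over the $K$ generators, the behavior described above can be summarized as follows (a sketch only, with $\beta$ the diversity coefficient of Section 5; the precise minimax formulation over the discriminator and classifier is given earlier in the paper):

\[
P_{\text{model}} = \frac{1}{K}\sum_{k=1}^{K} P_{G_k},
\qquad
\min_{G_1,\dots,G_K}\; \mathrm{JSD}\!\left(P_{\text{data}}\,\|\,P_{\text{model}}\right) \;-\; \beta\,\mathrm{JSD}\!\left(P_{G_1},\dots,P_{G_K}\right).
\]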
The MGAN can be efficiently trained by sharing parameters between its discriminator and classifier, and among its generators; thus our model scales to evaluation on real-world large-scale datasets. Comprehensive experiments on synthetic 2D data and the CIFAR-10, STL-10 and ImageNet databases demonstrate the following capabilities of our model: (i) achieving state-of-the-art Inception scores; (ii) generating diverse and appealing recognizable objects at different resolutions; and (iii) specializing in capturing different types of objects by the generators.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). arXiv preprint arXiv:1703.00573, 2017.
David Berthelot, Tom Schumm, and Luke Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 215–223, 2011.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Ishan Durugkar, Ian Gemp, and Sridhar Mahadevan. Generative multi-adversarial networks. arXiv preprint arXiv:1611.01673, 2016.
Arnab Ghosh, Viveka Kulharia, Vinay Namboodiri, Philip H. S. Torr, and Puneet K. Dokania. Multi-agent diverse generative adversarial networks. arXiv preprint arXiv:1704.02906, 2017.
Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013.
Ferenc Huszár. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? arXiv preprint arXiv:1511.05101, 2015.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. arXiv preprint arXiv:1706.02515, 2017.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015.
1708.02556 | 47 | Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectiï¬er nonlinearities improve neural net- work acoustic models. In Proc. ICML, volume 30, 2013. 5
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016. 1, 4, 5.1
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807–814, 2010. 5
J. v. Neumann. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1):295–320, 1928. 4
Tu Dinh Nguyen, Trung Le, Hung Vu, and Dinh Phung. Dual discriminator generative adversarial nets. In Advances in Neural Information Processing Systems 29 (NIPS), pp. accepted, 2017. 1, 4, 5.1, 5.2, 1
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016. 1, 5.2
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. 5.2, 1
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. 1, 4, 5.2
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234–2242, 2016. 1, 4, 5.2, 5.2, 1
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9, 2015. 5.2
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015. 2
Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, and Bernhard Schölkopf. Adagan: Boosting generative models. arXiv preprint arXiv:1701.02386, 2017. 1, 4
Ruohan Wang, Antoine Cully, Hyung Jin Chang, and Yiannis Demiris. Magan: Margin adaptation for generative adversarial networks. arXiv preprint arXiv:1704.03817, 2017. 1
Yaxing Wang, Lichao Zhang, and Joost van de Weijer. Ensembles of generative adversarial networks. arXiv preprint arXiv:1612.00991, 2016. 4, 5.2
# A APPENDIX: FRAMEWORK
In our proposed method, generators G1, G2, ..., GK are deep convolutional neural networks parameterized by θG. These networks share parameters in all layers except for the input layers. The input layer for generator Gk is parameterized by the mapping fθG,k(z) that maps the sampled noise z to the first hidden layer activation h. The shared layers are parameterized by the mapping gθG(h) that maps the first hidden layer to the generated data. The pseudo-code of sampling from the mixture is described in Alg. 1. Classifier C and discriminator D are also deep convolutional neural networks that are both parameterized by θCD. They share parameters in all layers except for the last layer. The pseudo-code of alternately learning θG and θCD using stochastic gradient descent is described in Alg. 2.
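This parameter-sharing scheme can be made concrete with a short sketch. The snippet below is an illustration under assumed layer sizes (hidden width, fully connected trunk, data dimensionality); it is not the exact architecture used in the experiments.

```python
# A minimal PyTorch sketch of the parameter-sharing scheme: each generator G_k
# has its own input mapping f_{theta_G,k}, while the mapping g_{theta_G} from
# the first hidden layer to the data space is shared by all generators.
import torch
import torch.nn as nn

class MixtureGenerator(nn.Module):
    def __init__(self, num_generators=10, noise_dim=100, hidden_dim=512, data_dim=3072):
        super().__init__()
        # One private input layer f_k per generator.
        self.input_layers = nn.ModuleList(
            [nn.Linear(noise_dim, hidden_dim) for _ in range(num_generators)]
        )
        # Shared mapping g from the first hidden layer to the generated data.
        self.shared = nn.Sequential(
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, data_dim),
            nn.Tanh(),
        )

    def forward(self, z, k):
        h = self.input_layers[k](z)   # h = f_{theta_G,k}(z)
        return self.shared(h)         # x = g_{theta_G}(h)
```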
Algorithm 1 Sampling from MGAN's mixture of generators.
1: Sample noise z from the prior Pz.
2: Sample a generator index u from Mult(π1, π2, ..., πK) with predefined mixing probability π = (π1, π2, ..., πK).
3: h = fθG,u(z)
4: x = gθG(h)
5: Return generated data x and the index u.
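Algorithm 1 translates almost directly into code. The sketch below assumes a MixtureGenerator-like module (as in the earlier sketch) and a list `pi` of mixing probabilities; the function name is illustrative.

```python
# A sketch of Algorithm 1: pick a generator index from the mixing distribution,
# then push a noise sample through that generator's path.
import torch

def sample_from_mixture(mixture, pi, noise_dim=100):
    z = torch.randn(1, noise_dim)                      # 1: z ~ P_z
    u = torch.multinomial(torch.tensor(pi), 1).item()  # 2: u ~ Mult(pi_1, ..., pi_K)
    x = mixture(z, u)                                  # 3-4: h = f_{theta_G,u}(z), x = g_{theta_G}(h)
    return x, u                                        # 5: return x and the index u
```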
Algorithm 2 Alternative training of MGAN using stochastic gradient descent.
1: for number of training iterations do
2:    Sample a minibatch of M data points (x(1), x(2), ..., x(M)) from the data distribution Pdata.
3:    Sample a minibatch of N generated data points (x̃(1), x̃(2), ..., x̃(N)) and N indices (u1, u2, ..., uN) from the current mixture.
4:    LC = −(β/N) Σ_{n=1}^{N} log Cun(x̃(n))
5:    LD = −(1/M) Σ_{m=1}^{M} log D(x(m)) − (1/N) Σ_{n=1}^{N} log[1 − D(x̃(n))]
6:    Update classifier C and discriminator D by descending along their gradient: ∇θCD (LC + LD).
7:    Sample a new minibatch of N generated data points (x̃(1), x̃(2), ..., x̃(N)) and N indices (u1, u2, ..., uN) from the current mixture.
8:    LG = −(1/N) Σ_{n=1}^{N} log D(x̃(n)) − (β/N) Σ_{n=1}^{N} log Cun(x̃(n))
9:    Update the mixture of generators G by descending along its gradient: ∇θG LG.
10: end for
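The three losses of Algorithm 2 can be written compactly as below. This is a sketch under the assumptions that D outputs probabilities in (0, 1), C outputs K classification logits, and β is the diversity coefficient; all helper names are illustrative.

```python
# Per-iteration losses of Algorithm 2.
import torch
import torch.nn.functional as F

def mgan_losses(D, C_logits, x_real, x_fake, u_fake, beta):
    # L_C = -(beta/N) * sum_n log C_{u_n}(x_fake_n)
    L_C = beta * F.cross_entropy(C_logits(x_fake), u_fake)
    # L_D = -(1/M) * sum_m log D(x_real_m) - (1/N) * sum_n log(1 - D(x_fake_n))
    L_D = -torch.log(D(x_real)).mean() - torch.log(1.0 - D(x_fake)).mean()
    # L_G = -(1/N) * sum_n log D(x_fake_n) - (beta/N) * sum_n log C_{u_n}(x_fake_n)
    L_G = -torch.log(D(x_fake)).mean() + beta * F.cross_entropy(C_logits(x_fake), u_fake)
    return L_C, L_D, L_G
```

C and D are updated by descending (LC + LD); the generators are updated by descending LG, computed on a freshly sampled minibatch of generated data.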
# B APPENDIX: PROOFS FOR SECTION 3.1
Proposition 1 (Prop. 1 restated). For fixed generators G1, G2, ..., GK and mixture weights π1, π2, ..., πK, the optimal classifier C* = (C*_1, ..., C*_K) and discriminator D* are given by:
\[
C_k^*(\mathbf{x}) = \frac{\pi_k\, p_{G_k}(\mathbf{x})}{\sum_{j=1}^{K} \pi_j\, p_{G_j}(\mathbf{x})}, \qquad
D^*(\mathbf{x}) = \frac{p_{data}(\mathbf{x})}{p_{data}(\mathbf{x}) + p_{model}(\mathbf{x})}
\]
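As a quick numerical sanity check of these expressions (an addition for illustration, not part of the original proof), the stated C* is a valid soft assignment for arbitrary positive density values:

```python
# Verify pointwise that C*_k(x) = pi_k p_{G_k}(x) / sum_j pi_j p_{G_j}(x)
# sums to one over k and lies in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
K, n_points = 4, 1000
pi = rng.dirichlet(np.ones(K))              # mixing weights
p_G = rng.random((K, n_points)) + 1e-6      # stand-in values for p_{G_k}(x) > 0

C_star = (pi[:, None] * p_G) / (pi[:, None] * p_G).sum(axis=0, keepdims=True)
assert np.allclose(C_star.sum(axis=0), 1.0)
assert np.all((C_star >= 0) & (C_star <= 1))
```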
Proof. The optimal D* was proved in Prop. 1 in (Goodfellow, 2016). This section shows a similar proof for the optimal C*. Assuming that C* can be optimized in the functional space, we can calculate the functional derivatives of J(G, C, D) with respect to each Ck(x) for k ∈ {2, ..., K}
and set them equal to zero:
\[
\frac{\delta J(G, C, D)}{\delta C_k(\mathbf{x})}
= \frac{\delta}{\delta C_k(\mathbf{x})} \int \Big[ \pi_1 p_{G_1}(\mathbf{x}) \log\Big(1 - \sum_{j=2}^{K} C_j(\mathbf{x})\Big)
+ \sum_{j=2}^{K} \pi_j p_{G_j}(\mathbf{x}) \log C_j(\mathbf{x}) \Big] d\mathbf{x}
= -\frac{\pi_1 p_{G_1}(\mathbf{x})}{1 - \sum_{j=2}^{K} C_j(\mathbf{x})} + \frac{\pi_k p_{G_k}(\mathbf{x})}{C_k(\mathbf{x})}
\]
Setting δJ(G, C, D)/δCk(x) to 0 for k ∈ {2, ..., K}, we get:
\[
\frac{\pi_1 p_{G_1}(\mathbf{x})}{C_1^*(\mathbf{x})} = \frac{\pi_2 p_{G_2}(\mathbf{x})}{C_2^*(\mathbf{x})} = \dots = \frac{\pi_K p_{G_K}(\mathbf{x})}{C_K^*(\mathbf{x})} \tag{6}
\]
The expression C*_k(x) = πk pGk(x) / Σ_{j=1}^{K} πj pGj(x) then follows from the equalities in Eq. (6), due to the fact that Σ_{k=1}^{K} C*_k(x) = 1.
Reformulation of L(G1:K). Replacing the optimal C* and D* into Eq. (2), we can reformulate the objective function for the generator as follows:
\[
\begin{aligned}
L(G_{1:K}) &= J(G, C^*, D^*) \\
&= \mathbb{E}_{\mathbf{x}\sim P_{data}}\Big[\log \frac{p_{data}(\mathbf{x})}{p_{data}(\mathbf{x})+p_{model}(\mathbf{x})}\Big]
+ \mathbb{E}_{\mathbf{x}\sim P_{model}}\Big[\log \frac{p_{model}(\mathbf{x})}{p_{data}(\mathbf{x})+p_{model}(\mathbf{x})}\Big] \\
&\quad - \beta \sum_{k=1}^{K} \pi_k\, \mathbb{E}_{\mathbf{x}\sim P_{G_k}}\Big[\log \frac{\pi_k\, p_{G_k}(\mathbf{x})}{\sum_{j=1}^{K}\pi_j\, p_{G_j}(\mathbf{x})}\Big]
\end{aligned}
\tag{7}
\]
The sum of the first two terms in Eq. (7) was shown in (Goodfellow et al., 2014) to be 2 · JSD(Pdata ‖ Pmodel) − log 4. The last term of Eq. (7) is related to the generalized Jensen-Shannon divergence JSDπ for the K generator distributions:
\[
\begin{aligned}
\sum_{k=1}^{K} \pi_k\, \mathbb{E}_{\mathbf{x}\sim P_{G_k}}\Big[\log \frac{\pi_k\, p_{G_k}(\mathbf{x})}{\sum_{j=1}^{K}\pi_j\, p_{G_j}(\mathbf{x})}\Big]
&= \sum_{k=1}^{K} \pi_k\, \mathbb{E}_{\mathbf{x}\sim P_{G_k}}\big[\log p_{G_k}(\mathbf{x})\big]
- \sum_{k=1}^{K} \pi_k\, \mathbb{E}_{\mathbf{x}\sim P_{G_k}}\Big[\log \sum_{j=1}^{K}\pi_j\, p_{G_j}(\mathbf{x})\Big]
+ \sum_{k=1}^{K} \pi_k \log \pi_k \\
&= -\sum_{k=1}^{K} \pi_k H(P_{G_k}) + H(P_{model}) + \sum_{k=1}^{K} \pi_k \log \pi_k \\
&= \mathrm{JSD}_\pi(P_{G_1}, P_{G_2}, \dots, P_{G_K}) + \sum_{k=1}^{K} \pi_k \log \pi_k
\end{aligned}
\tag{8}
\]
where H (P ) is the Shannon entropy for distribution P . Thus, L (G1:K) can be rewritten as:
\[
L(G_{1:K}) = -\log 4 + 2\cdot \mathrm{JSD}(P_{data}\,\|\,P_{model}) - \beta\cdot \mathrm{JSD}_\pi(P_{G_1}, P_{G_2}, \dots, P_{G_K}) - \beta \sum_{k=1}^{K} \pi_k \log \pi_k
\]
Theorem 3 (Thm. 3 restated). If the data distribution has the form pdata(x) = Σ_{k=1}^{K} πk qk(x), where the mixture components qk(x) are well-separated, the minimax problem in Eq. (2) or the optimization problem in Eq. (3) has the following solution:
\[
p_{G_k}(\mathbf{x}) = q_k(\mathbf{x}), \ \forall k = 1, \dots, K \quad \text{and} \quad
p_{model}(\mathbf{x}) = \sum_{k=1}^{K} \pi_k q_k(\mathbf{x}) = p_{data}(\mathbf{x})
\]
14
Proof. We ï¬rst recap the optimization problem for ï¬nding the optimal Gâ:
min (2 ISD (Paata||Pmodet) â 8» ISDz (Pay, Paz,» Pax)
The JSD in Eq. (8) is given by:
K TT, x ISD (Poy, Peas Pox) = So tHEx~Pe, [ive sen) - S melog mt, (9) k=1 ae 1 73PG; (x) k=1
The i-th expectation in Eq. (9) can be derived as follows:
xn Pe, [i see] < Exx Pe, [log 1] < 0 Vint T5PG; (x)
and the equality occurs if Sa = 1 almost everywhere or equivalently for almost every x Tj except for those in a zero measure set, we have:
pa, (x) > 0 => pa; (x) =0, Vj Fk (10)
Therefore, we obtain the following inequality: | 1708.02556#57 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
Therefore, we obtain the following inequality:
\[
\mathrm{JSD}_\pi(P_{G_1}, P_{G_2}, \dots, P_{G_K}) \leq -\sum_{k=1}^{K} \pi_k \log \pi_k = \sum_{k=1}^{K} \pi_k \log \frac{1}{\pi_k} = H(\pi)
\]
and the equality occurs if for almost every x except for those in a zero measure set, we have:
\[
\forall k:\ p_{G_k}(\mathbf{x}) > 0 \implies p_{G_j}(\mathbf{x}) = 0, \ \forall j \neq k
\]
It follows that
\[
2\cdot \mathrm{JSD}(P_{data}\,\|\,P_{model}) - \beta\cdot \mathrm{JSD}_\pi(P_{G_1}, P_{G_2}, \dots, P_{G_K}) \geq 0 - \beta H(\pi) = -\beta H(\pi)
\]
and we reach the minimum if pGk = qk for all k, since this solution satisfies both
\[
p_{model}(\mathbf{x}) = \sum_{k=1}^{K} \pi_k q_k(\mathbf{x}) = p_{data}(\mathbf{x})
\]
and the conditions depicted in Eq. (10). That concludes our proof.
# C APPENDIX: ADDITIONAL EXPERIMENTS
C.1 SYNTHETIC 2D GAUSSIAN DATA
The true data is sampled from a 2D mixture of 8 Gaussian distributions with a covariance matrix 0.02I and means arranged in a circle of zero centroid and radius 2.0. We use a simple architecture of 8 generators with two fully connected hidden layers and a classifier and a discriminator with one shared hidden layer. All hidden layers contain the same number of 128 ReLU units. The input layer of generators contains 256 noise units sampled from the isotropic multivariate Gaussian distribution N(0, I). We do not use batch normalization in any layer. We refer to Tab. 2 for more specifications of the network and hyperparameters. "Shared" is short for parameter sharing among generators or between the classifier and the discriminator. Feature maps of 8/1 in the last layer for C and D means that two separate fully connected layers are applied to the penultimate layer, one for C that outputs 8 logits and another for D that outputs 1 logit.
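For concreteness, the synthetic training distribution can be sampled as follows (a sketch; the function name and seed handling are illustrative):

```python
# A mixture of 8 Gaussians with covariance 0.02*I whose means lie on a circle
# of radius 2.0 centered at the origin.
import numpy as np

def sample_ring_of_gaussians(n_samples, n_modes=8, radius=2.0, std=np.sqrt(0.02), seed=0):
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * np.arange(n_modes) / n_modes
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (8, 2)
    modes = rng.integers(0, n_modes, size=n_samples)
    return means[modes] + std * rng.standard_normal((n_samples, 2))

x = sample_ring_of_gaussians(512)   # one minibatch of real 2D data
```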
The effect of the number of generators on generated samples. Fig. 6 shows samples produced by MGANs with different numbers of generators trained on synthetic data for 25,000 epochs. The model with 1 generator behaves similarly to the standard GAN as expected. The models with 2, 3 and 4 generators all successfully cover 8 modes, but the ones with more generators draw fewer points scattered between adjacent modes. Finally, the model with 10 generators also covers 8 modes, wherein 2 generators share one mode and one generator hovers around another mode.
Table 2: Network architecture and hyperparameters for 2D Gaussian data.
Operation                        Feature maps    Nonlinearity        Shared?
G(z): z ~ N(0, I)                256
    Fully connected              128             ReLU                ✗
    Fully connected              128             ReLU                ✓
    Fully connected              2               Linear              ✓
C(x), D(x)                       2
    Fully connected              128             Leaky ReLU          ✓
    Fully connected              8/1             Softmax/Sigmoid     ✗
Number of generators             8
Batch size for real data         512
Batch size for each generator    128
Number of iterations             25,000
Leaky ReLU slope                 0.2
Learning rate                    0.0002
Regularization constant          β = 0.125
Optimizer                        Adam(β1 = 0.5, β2 = 0.999)
Weight, bias initialization      N(µ = 0, σ = 0.02I), 0
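A sketch of these 2D networks, matching the shapes in Table 2 (module names are illustrative): 8 generators with a private 256→128 input layer and shared 128→128→2 layers, and a classifier/discriminator pair sharing one 2→128 hidden layer with separate 8-way and 1-way output heads.

```python
# Networks for the synthetic 2D experiment, following Table 2's shapes.
import torch.nn as nn

K, noise_dim, hidden = 8, 256, 128

gen_inputs = nn.ModuleList([nn.Linear(noise_dim, hidden) for _ in range(K)])   # private f_k
gen_shared = nn.Sequential(nn.ReLU(), nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

cd_trunk = nn.Sequential(nn.Linear(2, hidden), nn.LeakyReLU(0.2))   # shared C/D hidden layer
c_head = nn.Linear(hidden, K)   # 8 logits for the classifier
d_head = nn.Linear(hidden, 1)   # 1 logit for the discriminator
```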
Figure 6: Samples generated by MGAN models trained on synthetic data with 2, 3, 4 and 10 generators. Generated data are in blue and data samples from the 8 Gaussians are in red.
The effect of β on generated samples. To examine the behavior of the diversity coefficient β, Fig. 7 compares samples produced by our MGAN with 4 generators after 25,000 epochs of training with different values of β. Without the JSD force (β = 0), generated samples cluster around one mode. When β = 0.25, generated data clusters near 4 different modes. When β = 0.75 or 1.0, the JSD force is too strong and causes the generators to collapse, generating 4 increasingly tight clusters. When β = 0.5, generators successfully cover all of the 8 modes.
C.2 REAL-WORLD DATASETS
Fixing batch normalization center. During training we observe that the percentage of active neurons, which we define as ReLU units with positive activation for at least 10% of samples in the minibatch, chronically declined. Fig. 8a shows the percentage of active neurons in generators trained on CIFAR-10 declined consistently to 55% in layer 2 and 60% in layer 3. Therefore, the quality of generated images, after reaching the peak level, started declining. One possible cause is that the batch normalization center (offset) is gradually shifted to the negative range, as shown in the histogram in Fig. 8b. We also observe the same problem in DCGAN. Our ad-hoc solution for this problem is to fix the offset at zero for all layers in the generator networks. The rationale is that for each feature map, the ReLU gates will open for about the 50% highest inputs in a minibatch across all locations and generators, and close for the rest. Therefore, batch normalization can keep ReLU units alive even when most of their inputs are otherwise negative, and introduces a form of competition that encourages generators to "specialize" in different features. This measure
significantly improves performance but does not totally solve the dying ReLUs problem. We find that late in the training, the input to generators' ReLU units became more and more right-skewed, causing the ReLU gates to open less and less often.
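A minimal sketch of the fixed-center batch normalization described above (assuming standard PyTorch BatchNorm2d layers in the generators):

```python
# Keep batch normalization's learnable scale but freeze its center (offset)
# at zero, as in the ad-hoc fix discussed above.
import torch.nn as nn

def bn_without_center(num_features):
    bn = nn.BatchNorm2d(num_features, affine=True)
    nn.init.zeros_(bn.bias)              # offset fixed at zero
    bn.bias.requires_grad_(False)        # never updated during training
    return bn
```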
Figure 8: Observation of active neuron rates and batch normalization centers in MGAN's generators trained on CIFAR-10.
Experiment settings. For the experiments on three large-scale natural scene datasets (CIFAR-10, STL-10, ImageNet), we closely followed the network architecture and training procedure of DCGAN. The specifications of our models trained on CIFAR-10, STL-10 48×48, STL-10 96×96 and ImageNet datasets are described in Tabs. (3, 4, 5, 6), respectively. "BN" is short for batch normalization and "BN center" is short for whether to learn batch normalization's center or set it at zero. "Shared" is short for parameter sharing among generators or between the classifier and the discriminator. Feature maps of 10/1 in the last layer for C and D means that two separate fully connected layers are applied to the penultimate layer, one for C that outputs 10 logits and another for D that outputs 1 logit. Finally, Figs. (9, 10, 11, 12, 13) respectively are the enlarged version of Figs. (3a, 3b, 3c, 4a, 4b) in the main manuscript.
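The shared classifier/discriminator trunk with the "10/1" output heads can be sketched as below; the channel sizes follow Table 3, while the module structure itself is an illustrative assumption.

```python
# Shared C/D trunk with two separate fully connected heads on the penultimate
# features: num_generators classification logits and one discriminator logit.
import torch
import torch.nn as nn

class SharedCD(nn.Module):
    def __init__(self, num_generators=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 128, 5, 2, 2), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 5, 2, 2), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 5, 2, 2), nn.BatchNorm2d(512), nn.LeakyReLU(0.2),
        )
        self.c_head = nn.Linear(512 * 4 * 4, num_generators)  # classifier logits
        self.d_head = nn.Linear(512 * 4 * 4, 1)               # discriminator logit

    def forward(self, x):                  # x: 3x32x32 input images
        h = self.trunk(x).flatten(1)
        return self.c_head(h), self.d_head(h)
```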
1708.02556 | 69 | # Table 3: Network architecture and hyperparameters for the CIFAR-10 dataset.
Network architecture (columns: Operation; Kernel; Strides; Feature maps; BN?; BN center?; Nonlinearity; Shared?):

G(z): z ~ Uniform[-1, 1], feature maps 100
- Fully connected; -; -; 4×4×512; BN ✓; BN center ✗; ReLU; shared ✗
- Transposed convolution; 5×5; 2×2; 256; BN ✓; BN center ✗; ReLU; shared ✓
- Transposed convolution; 5×5; 2×2; 128; BN ✓; BN center ✗; ReLU; shared ✓
- Transposed convolution; 5×5; 2×2; 3; BN ✗; BN center ✗; Tanh; shared ✓

C(x), D(x): input 32×32×3
- Convolution; 5×5; 2×2; 128; BN ✓; BN center ✓; Leaky ReLU; shared ✓
- Convolution; 5×5; 2×2; 256; BN ✓; BN center ✓; Leaky ReLU; shared ✓
- Convolution; 5×5; 2×2; 512; BN ✓; BN center ✓; Leaky ReLU; shared ✓
- Fully connected; -; -; 10/1; BN ✗; BN center ✗; Softmax/Sigmoid; shared ✗

Hyperparameters: number of generators 10; batch size for real data 64; batch size for each generator 12; number of iterations 250; Leaky ReLU slope 0.2; learning rate 0.0002; regularization constant β = 0.01; optimizer Adam(β1 = 0.5, β2 = 0.999); weight, bias initialization N(µ = 0, σ = 0.01), 0.
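For readers who find code easier to scan than the table, below is a rough PyTorch sketch of the generator column of Table 3. The noise dimension (100), layer widths, 5×5 kernels, stride 2 and the Tanh output follow the table; the padding/output-padding choices, module names and the use of standard BatchNorm (the paper fixes the BN center at zero, which this sketch does not reproduce) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MGANGenerator(nn.Module):
    """Rough sketch of one generator from Table 3 (CIFAR-10, 32x32x3 output).

    Layer sizes follow the table: FC to 4x4x512, then three 5x5 stride-2
    transposed convolutions with 256, 128 and 3 feature maps, ReLU activations
    and a Tanh output. Standard BatchNorm2d stands in for the paper's variant
    with the center fixed at zero.
    """

    def __init__(self, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(z_dim, 4 * 4 * 512)
        self.body = nn.Sequential(
            nn.BatchNorm2d(512), nn.ReLU(),
            nn.ConvTranspose2d(512, 256, 5, stride=2, padding=2, output_padding=1),
            nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 5, stride=2, padding=2, output_padding=1),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 3, 5, stride=2, padding=2, output_padding=1),
            nn.Tanh(),                      # output layer: no BatchNorm
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 512, 4, 4)  # reshape to a 4x4 feature map
        return self.body(h)


z = torch.rand(12, 100) * 2 - 1             # z ~ Uniform[-1, 1], batch of 12
print(MGANGenerator()(z).shape)             # torch.Size([12, 3, 32, 32])
```

In the full model, ten such generators would share every layer except the first fully connected one, matching the "shared" column of the table.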
17 | 1708.02556#69 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 71 | Operation Kernel Strides Feature maps BN? BN center? Nonlinearity G (z) : z â¼ Uniform [â1, 1] Fully connected Transposed convolution Transposed convolution Transposed convolution Transposed convolution C (x) , D (x) Convolution Convolution Convolution Convolution Fully connected Number of generators Batch size for real data Batch size for each generator Number of iterations Leaky ReLU slope Learning rate Regularization constants 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 10 64 12 250 0.2 0.0002 β = 1.0 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 100 4Ã4Ã1024 512 256 128 3 48Ã48Ã3 128 256 512 1024 10/1 â â â â à â â â â à à à à à à â â â â à ReLU ReLU ReLU ReLU Tanh Leaky ReLU Leaky ReLU Leaky ReLU Leaky ReLU Softmax/Sigmoid Optimizer Adam(β1 = 0.5, β2 = 0.999) Weight, bias initialization N (µ = 0, Ï = 0.01), 0 Shared? à | 1708.02556#71 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 74 | Operation Kernel Strides Feature maps BN? BN center? Nonlinearity G (z) : z â¼ Uniform [â1, 1] Fully connected Transposed convolution Transposed convolution Transposed convolution Transposed convolution Transposed convolution C (x) , D (x) Convolution Convolution Convolution Convolution Convolution Fully connected Number of generators Batch size for real data Batch size for each generator Number of iterations Leaky ReLU slope Learning rate Regularization constants 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 10 64 12 250 0.2 0.0002 β = 1.0 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 100 4Ã4Ã2046 1024 512 256 128 3 32Ã32Ã3 128 256 512 1024 2048 10/1 â â â â â à â â â â â à à à à à à à â â â â â à ReLU ReLU ReLU ReLU ReLU Tanh Leaky ReLU Leaky ReLU Leaky ReLU Leaky ReLU Leaky ReLU Softmax/Sigmoid Optimizer Adam(β1 | 1708.02556#74 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 76 | 18
Table 6: Network architecture and hyperparameters for the ImageNet dataset.
Network architecture (columns: Operation; Kernel; Strides; Feature maps; BN?; BN center?; Nonlinearity; Shared?):

G(z): z ~ Uniform[-1, 1], feature maps 100
- Fully connected; -; -; 4×4×512; BN ✓; BN center ✗; ReLU; shared ✗
- Transposed convolution; 5×5; 2×2; 256; BN ✓; BN center ✗; ReLU; shared ✓
- Transposed convolution; 5×5; 2×2; 128; BN ✓; BN center ✗; ReLU; shared ✓
- Transposed convolution; 5×5; 2×2; 3; BN ✗; BN center ✗; Tanh; shared ✓

C(x), D(x): input 32×32×3
- Convolution; 5×5; 2×2; 128; BN ✓; BN center ✓; Leaky ReLU; shared ✓
- Convolution; 5×5; 2×2; 256; BN ✓; BN center ✓; Leaky ReLU; shared ✓
- Convolution; 5×5; 2×2; 512; BN ✓; BN center ✓; Leaky ReLU; shared ✓
- Fully connected; -; -; 10/1; BN ✗; BN center ✗; Softmax/Sigmoid; shared ✗

Hyperparameters: number of generators 10; batch size for real data 64; batch size for each generator 12; number of iterations 50; Leaky ReLU slope 0.2; learning rate 0.0002; regularization constant β = 0.1; optimizer Adam(β1 = 0.5, β2 = 0.999); weight, bias initialization N(µ = 0, σ = 0.01), 0. | 1708.02556#76 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02556 | 77 | Figure 9: Images generated by MGAN trained on the CIFAR-10 dataset.
Figure 10: Images generated by MGAN trained on the rescaled 48×48 STL-10 dataset.
Figure 11: Images generated by MGAN trained on the rescaled 32×32 ImageNet dataset.
Figure 12: Cherry-picked samples generated by MGAN trained on the 96×96 STL-10 dataset.
Figure 13: Incomplete, unrealistic samples generated by MGAN trained on the 96×96 STL-10 dataset.
23 | 1708.02556#77 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | [
{
"id": "1703.00573"
},
{
"id": "1701.00160"
},
{
"id": "1612.00991"
},
{
"id": "1701.02386"
},
{
"id": "1701.07875"
},
{
"id": "1703.10717"
},
{
"id": "1704.03817"
},
{
"id": "1506.03365"
},
{
"id": "1704.02906"
},
{
"id": "1706.02515"
},
{
"id": "1603.04467"
},
{
"id": "1606.00704"
},
{
"id": "1511.01844"
},
{
"id": "1511.05101"
},
{
"id": "1611.02163"
},
{
"id": "1610.09585"
},
{
"id": "1611.01673"
},
{
"id": "1511.06434"
}
] |
1708.02182 | 1 | # Stephen Merity 1 Nitish Shirish Keskar 1 Richard Socher 1
# Abstract
Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.
# 1. Introduction | 1708.02182#1 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 2 | # 1. Introduction
A naïve application of dropout (Srivastava et al., 2014) to an RNN's hidden state is ineffective as it disrupts the RNN's ability to retain long term dependencies (Zaremba et al., 2014). Gal & Ghahramani (2016) propose overcoming this problem by retaining the same dropout mask across multiple time steps as opposed to sampling a new binary mask at each timestep. Another approach is to regularize the network through limiting updates to the RNN's hidden state. One such approach is taken by Semeniuta et al. (2016) wherein the authors drop updates to network units, specifically the input gates of the LSTM, in lieu of the units themselves. This is reminiscent of zoneout (Krueger et al., 2016) where updates to the hidden state may fail to occur for randomly selected neurons.
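As a concrete illustration of the mask-reuse idea above (a minimal sketch, not any paper's reference code): one Bernoulli mask is drawn per call and broadcast over the time dimension, so the same units are dropped at every timestep. Function and variable names are illustrative.

```python
import torch


def locked_dropout(x, p=0.5, training=True):
    """Apply one dropout mask to every timestep of x (seq_len, batch, features).

    Unlike standard dropout, the mask is sampled once and reused across the
    time dimension, so the same units are zeroed at every timestep.
    """
    if not training or p == 0:
        return x
    # One mask per (batch, feature) slice, rescaled to keep expectations equal.
    mask = x.new_empty(1, x.size(1), x.size(2)).bernoulli_(1 - p) / (1 - p)
    return x * mask  # broadcast over seq_len


# The same units are dropped at t = 0 and t = 1.
h = torch.ones(4, 2, 8)              # (seq_len, batch, features)
out = locked_dropout(h, p=0.5)
print(torch.equal(out[0] == 0, out[1] == 0))  # True
```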
Instead of operating on the RNN's hidden states, one can regularize the network through restrictions on the recurrent matrices as well. This can be done either through restricting the capacity of the matrix (Arjovsky et al., 2016; Wisdom et al., 2016; Jing et al., 2016) or through element-wise interactions (Balduzzi & Ghifary, 2016; Bradbury et al., 2016; Seo et al., 2016). | 1708.02182#2 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 3 | 
Other forms of regularization explicitly act upon activations such as batch normalization (Ioffe & Szegedy, 2015), recurrent batch normalization (Cooijmans et al., 2016), and layer normalization (Ba et al., 2016). These all introduce additional training parameters and can complicate the training process while increasing the sensitivity of the model.
Effective regularization techniques for deep learning have been the subject of much research in recent years. Given the over-parameterization of neural networks, generalization performance crucially relies on the ability to regularize the models sufficiently. Strategies such as dropout (Srivastava et al., 2014) and batch normalization (Ioffe & Szegedy, 2015) have found great success and are now ubiquitous in feed-forward and convolutional neural networks. Naïvely applying these approaches to the case of recurrent neural networks (RNNs) has not been highly successful however. Many recent works have hence been focused on the extension of these regularization strategies to RNNs; we briefly discuss some of them below. | 1708.02182#3 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 4 | 
In this work, we investigate a set of regularization strategies that are not only highly effective but which can also be used with no modification to existing LSTM implementations. The weight-dropped LSTM applies recurrent regularization through a DropConnect mask on the hidden-to-hidden recurrent weights. Other strategies include the use of randomized-length backpropagation through time (BPTT), embedding dropout, activation regularization (AR), and temporal activation regularization (TAR).
As no modifications are required of the LSTM implementation these regularization strategies are compatible with black box libraries, such as NVIDIA cuDNN, which can be many times faster than naïve LSTM implementations.
1Salesforce Research, Palo Alto, USA. Correspondence to: Stephen Merity <[email protected]>.
Effective methods for training deep recurrent networks have also been a topic of renewed interest. Once a model
| 1708.02182#4 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 5 | 
has been defined, the training algorithm used is required to not only find a good minimizer of the loss function but also converge to such a minimizer rapidly. The choice of the optimizer is even more important in the context of regularized models since such strategies, especially the use of dropout, can impede the training process. Stochastic gradient descent (SGD), and its variants such as Adam (Kingma & Ba, 2014) and RMSprop (Tieleman & Hinton, 2012) are amongst the most popular training methods. These methods iteratively reduce the training loss through scaled (stochastic) gradient steps. In particular, Adam has been found to be widely applicable despite requiring less tuning of its hyperparameters. In the context of word-level language modeling, past work has empirically found that SGD outperforms other methods in not only the final loss but also in the rate of convergence. This is in agreement with recent evidence pointing to the insufficiency of adaptive gradient methods (Wilson et al., 2017).
vent the use of black box RNN implementations that may be many times faster due to low-level hardware-specific optimizations. | 1708.02182#5 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 6 | 
vent the use of black box RNN implementations that may be many times faster due to low-level hardware-specific optimizations.
We propose the use of DropConnect (Wan et al., 2013) on the recurrent hidden to hidden weight matrices which does not require any modifications to an RNN's formulation. As the dropout operation is applied once to the weight matrices, before the forward and backward pass, the impact on training speed is minimal and any standard RNN implementation can be used, including inflexible but highly optimized black box LSTM implementations such as NVIDIA's cuDNN LSTM.
By performing DropConnect on the hidden-to-hidden weight matrices [U^i, U^f, U^o, U^c] within the LSTM, we can prevent overfitting from occurring on the recurrent connections of the LSTM. This regularization technique would also be applicable to preventing overfitting on the recurrent weight matrices of other RNN cells. | 1708.02182#6 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 7 | 
Given the success of SGD, especially within the language modeling domain, we investigate the use of averaged SGD (ASGD) (Polyak & Juditsky, 1992) which is known to have superior theoretical guarantees. ASGD carries out iterations similar to SGD, but instead of returning the last iterate as the solution, returns an average of the iterates past a certain, tuned, threshold T. This threshold T is typically tuned and has a direct impact on the performance of the method. We propose a variant of ASGD where T is determined on the fly through a non-monotonic criterion and show that it achieves better training outcomes compared to SGD.
As the same weights are reused over multiple timesteps, the same individual dropped weights remain dropped for the entirety of the forward and backward pass. The result is similar to variational dropout, which applies the same dropout mask to recurrent connections within the LSTM by performing dropout on h_{t-1}, except that the dropout is applied to the recurrent weights. DropConnect could also be used on the non-recurrent weights of the LSTM [W^i, W^f, W^o] though our focus was on preventing overfitting on the recurrent connection.
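A minimal sketch of this weight-dropping idea, written with an explicit LSTM cell rather than a black-box implementation so that the masking of the recurrent matrix is visible. All names, the initialization and the gate packing order are illustrative assumptions; this is not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightDropLSTM(nn.Module):
    """Minimal LSTM with DropConnect on its hidden-to-hidden weights.

    The recurrent matrix U (holding the four gate matrices stacked together)
    is masked once per forward pass, so the same dropped connections are
    reused at every timestep of the sequence.
    """

    def __init__(self, input_size, hidden_size, weight_p=0.5):
        super().__init__()
        self.W = nn.Linear(input_size, 4 * hidden_size)   # input-to-hidden
        self.U = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.weight_p = weight_p
        self.hidden_size = hidden_size

    def forward(self, x_seq, state=None):
        batch = x_seq.size(1)
        if state is None:
            h = x_seq.new_zeros(batch, self.hidden_size)
            c = x_seq.new_zeros(batch, self.hidden_size)
        else:
            h, c = state
        # DropConnect: one mask on the recurrent weights for the whole pass.
        U = F.dropout(self.U, p=self.weight_p, training=self.training)
        outputs = []
        for x_t in x_seq:                                  # (seq, batch, feat)
            gates = self.W(x_t) + h @ U.t()
            i, f, g, o = gates.chunk(4, dim=-1)
            c = torch.sigmoid(i) * torch.tanh(g) + torch.sigmoid(f) * c
            h = torch.sigmoid(o) * torch.tanh(c)
            outputs.append(h)
        return torch.stack(outputs), (h, c)


rnn = WeightDropLSTM(input_size=10, hidden_size=16, weight_p=0.5)
out, _ = rnn(torch.randn(5, 3, 10))                        # 5 steps, batch 3
print(out.shape)                                           # torch.Size([5, 3, 16])
```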
# 2. Weight-dropped LSTM
# 3. Optimization
We refer to the mathematical formulation of the LSTM, | 1708.02182#7 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 8 | # 2. Weight-dropped LSTM
# 3. Optimization
We refer to the mathematical formulation of the LSTM,
$i_t = \sigma(W^i x_t + U^i h_{t-1})$
$f_t = \sigma(W^f x_t + U^f h_{t-1})$
$o_t = \sigma(W^o x_t + U^o h_{t-1})$
$\tilde{c}_t = \tanh(W^c x_t + U^c h_{t-1})$
$c_t = i_t \odot \tilde{c}_t + f_t \odot c_{t-1}$
$h_t = o_t \odot \tanh(c_t)$
SGD is among the most popular methods for training deep learning models across various modalities including computer vision, natural language processing, and reinforcement learning. The training of deep networks can be posed as a non-convex optimization problem
$\min_w \; \frac{1}{N} \sum_{i=1}^{N} f_i(w),$ | 1708.02182#8 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 9 | 
where [W^i, W^f, W^o, U^i, U^f, U^o] are weight matrices, x_t is the vector input to the timestep t, h_t is the current exposed hidden state, c_t is the memory cell state, and $\odot$ is element-wise multiplication.
where f_i is the loss function for the i-th data point, w are the weights of the network, and the expectation is taken over the data. Given a sequence of learning rates, γ_k, SGD iteratively takes steps of the form
Preventing overfitting within the recurrent connections of an RNN has been an area of extensive research in language modeling. The majority of previous recurrent regularization techniques have acted on the hidden state vector h_{t-1}, most frequently introducing a dropout operation between timesteps, or performing dropout on the update to the memory state c_t. These modifications to a standard LSTM pre-

$w_{k+1} = w_k - \gamma_k \hat{\nabla} f(w_k), \qquad (1)$ | 1708.02182#9 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 10 | 
where the subscript denotes the iteration number and $\hat{\nabla}$ denotes a stochastic gradient that may be computed on a minibatch of data points. SGD demonstrably performs well in practice and also possesses several attractive theoretical properties such as linear convergence (Bottou et al., 2016), saddle point avoidance (Panageas & Piliouras, 2016) and
| 1708.02182#10 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 11 | 
better generalization performance (Hardt et al., 2015). For the specific task of neural language modeling, traditionally SGD without momentum has been found to outperform other algorithms such as momentum SGD (Sutskever et al., 2013), Adam (Kingma & Ba, 2014), Adagrad (Duchi et al., 2011) and RMSProp (Tieleman & Hinton, 2012) by a statistically significant margin.
Motivated by this observation, we investigate averaged SGD (ASGD) to further improve the training process. ASGD has been analyzed in depth theoretically and many surprising results have been shown including its asymptotic second-order convergence (Polyak & Juditsky, 1992; Mandt et al., 2017). ASGD takes steps identical to equation (1) but instead of returning the last iterate as the solution, returns the average $\frac{1}{K - T + 1} \sum_{i=T}^{K} w_i$, where K is the total number of iterations and T < K is a user-specified averaging trigger. | 1708.02182#11 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 12 | 
SGD to a neighborhood around a solution. In the case of SGD, certain learning-rate reduction strategies such as the step-wise strategy analogously reduce the learning rate by a fixed quantity at such a point. A common strategy employed in language modeling is to reduce the learning rates by a fixed proportion when the performance of the model's primary metric (such as perplexity) worsens or stagnates. Along the same lines, one could make a triggering decision based on the performance of the model on the validation set. However, instead of averaging immediately after the validation metric worsens, we propose a non-monotonic criterion that conservatively triggers the averaging when the validation metric fails to improve for multiple cycles; see Algorithm 1. Given that the choice of triggering is irreversible, this conservatism ensures that the randomness of training does not play a major role in the decision. Analogous strategies have also been proposed for learning-rate reduction in SGD (Keskar & Saon, 2015).
Algorithm 1 Non-monotonically Triggered ASGD (NT-ASGD)
Inputs: Initial point w_0, learning rate γ, logging interval L, non-monotone interval n.
1: Initialize k ← 0, t ← 0, T ← 0, logs ← []
2: while stopping criterion not met do
3: | 1708.02182#12 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 14 | Despite its theoretical appeal, ASGD has found limited practical use in training of deep networks. This may be in part due to unclear tuning guidelines for the learning-rate schedule γk and averaging trigger T . If the averaging is triggered too soon, the efï¬cacy of the method is impacted, and if it is triggered too late, many additional iterations may be needed to converge to the solution. In this section, we describe a non-monotonically triggered variant of ASGD (NT-ASGD), which obviates the need for tuning T . Fur- ther, the algorithm uses a constant learning rate throughout the experiment and hence no further tuning is necessary for the decay scheduling.
While the algorithm introduces two additional hyperparameters, the logging interval L and non-monotone interval n, we found that setting L to be the number of iterations in an epoch and n = 5 worked well across various models and data sets. As such, we use this setting in all of our NT-ASGD experiments in the following section and demonstrate that it achieves better training outcomes as compared to SGD.
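A compact sketch of the NT-ASGD loop as described above (Algorithm 1 is only partly reproduced in this chunk): plain SGD runs until the validation metric has failed to improve on its best value from more than n logging points ago, after which the parameters of subsequent iterates are averaged. The training and validation callables, the per-epoch snapshotting (the method itself averages every iterate after the trigger) and all names are illustrative assumptions, not a reference implementation.

```python
import torch


def nt_asgd_train(model, run_epoch, validate, lr=1.0, epochs=100, n=5):
    """Sketch of non-monotonically triggered ASGD (NT-ASGD).

    `run_epoch(model, opt)` performs one epoch of SGD updates and
    `validate(model)` returns a metric where lower is better (e.g. perplexity).
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    logs, avg, num_avg = [], None, 0

    for _ in range(epochs):
        run_epoch(model, opt)
        val = validate(model)
        # Non-monotonic trigger: averaging starts (irreversibly) once the
        # current metric is worse than the best seen more than n logs ago.
        if avg is None and len(logs) > n and val > min(logs[:-n]):
            avg = {k: v.detach().clone() for k, v in model.named_parameters()}
            num_avg = 1
        elif avg is not None:
            num_avg += 1
            for k, v in model.named_parameters():
                avg[k] += (v.detach() - avg[k]) / num_avg   # running mean
        logs.append(val)

    if avg is not None:                     # return the averaged iterate
        with torch.no_grad():
            for k, v in model.named_parameters():
                v.copy_(avg[k])
    return model
```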
# 4. Extended regularization techniques
In addition to the regularization and optimization techniques above, we explored additional regularization techniques that aimed to improve data efficiency during training and to prevent overfitting of the RNN model. | 1708.02182#13 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 15 | # 4.1. Variable length backpropagation sequences
Given a fixed sequence length that is used to break a data set into fixed length batches, the data set is not efficiently used. To illustrate this, imagine being given 100 elements to perform backpropagation through with a fixed backpropagation through time (BPTT) window of 10. Any element divisible by 10 will never have any elements to backprop into, no matter how many times you may traverse the data set. Indeed, the backpropagation window that each element receives is equal to i mod 10 where i is the element's index. This is data inefficient, preventing 1/10 of the data set from ever being able to improve itself in a recurrent fashion, and resulting in 8/10 of the remaining elements receiving only a partial backpropagation window compared to the full possible backpropagation window of length 10. | 1708.02182#14 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 16 | Ideally, averaging needs to be triggered when the SGD it- erates converge to a steady-state distribution (Mandt et al., 2017). This is roughly equivalent to the convergence of
To prevent such inefficient data usage, we randomly select the sequence length for the forward and backward pass in two steps. First, we select the base sequence length to be
seq with probability p and seq/2 with probability 1 − p, where p is a high value approaching 1. This spreads the starting point for the BPTT window beyond the base sequence length. We then select the sequence length according to N(seq, s), where seq is the base sequence length and s is the standard deviation. This jitters the starting point such that it doesn't always fall on a specific word divisible by seq or seq/2. From these, the sequence length more efficiently uses the data set, ensuring that when given enough epochs all the elements in the data set experience a full BPTT window, while ensuring the average sequence length remains around the base sequence length for computational efficiency. | 1708.02182#15 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
During training, we rescale the learning rate depending on the length of the resulting sequence compared to the original specified sequence length. The rescaling step is necessary as sampling arbitrary sequence lengths with a fixed learning rate favors short sequences over longer ones. This linear scaling rule has been noted as important for training large scale minibatch SGD without loss of accuracy (Goyal et al., 2017) and is a component of unbiased truncated backpropagation through time (Tallec & Ollivier, 2017).
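Concretely, the linear rescaling can be as simple as the following sketch (the base learning rate of 30 and base length of 70 are the values used in the experiments; the function name is illustrative):

```python
def rescale_lr(base_lr=30.0, seq_len=70, base_seq=70):
    # shorter sampled sequences receive a proportionally smaller learning rate
    return base_lr * seq_len / base_seq

print(rescale_lr(seq_len=35))  # 15.0
```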
# 4.2. Variational dropout
In standard dropout, a new binary dropout mask is sampled each and every time the dropout function is called. New dropout masks are sampled even if the given connection is repeated, such as the input x0 to an LSTM at timestep t = 0 receiving a different dropout mask than the input x1 fed to the same LSTM at t = 1. A variant of this, variational dropout (Gal & Ghahramani, 2016), samples a binary dropout mask only once upon the first call and then reuses that locked dropout mask for all repeated connections within the forward and backward pass.
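A minimal NumPy sketch of this "locked" mask reuse (the shapes and the 0.5 rate are illustrative, not prescribed here):

```python
import numpy as np

def locked_dropout(x, p=0.5, rng=np.random):
    # x: (timesteps, batch, features); one mask per sequence, shared across timesteps
    if p == 0:
        return x
    mask = (rng.random(size=(1, x.shape[1], x.shape[2])) > p) / (1 - p)
    return x * mask  # the same mask is broadcast to every timestep

x = np.ones((5, 2, 4))
print(locked_dropout(x)[:, 0, 0])  # identical value at every timestep for a given unit
```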
While we propose using DropConnect rather than variational dropout to regularize the hidden-to-hidden transition within an RNN, we use variational dropout for all other dropout operations, specifically using the same dropout mask for all inputs and outputs of the LSTM within a given forward and backward pass. Each example within the minibatch uses a unique dropout mask, rather than a single dropout mask being used over all examples, ensuring diversity in the elements dropped out.
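A simplified PyTorch sketch of DropConnect applied to the hidden-to-hidden weight matrix. This is an illustrative re-implementation, not the authors' code; a fully trainable version would keep the pristine weights as the learnable parameter and route gradients to them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, weight_p=0.5):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size)
        self.weight_p = weight_p
        # keep a pristine copy of the recurrent weights so a fresh mask is applied each pass
        self.register_buffer("raw_w_hh", self.lstm.weight_hh_l0.data.clone())

    def forward(self, x, hidden=None):
        # DropConnect: mask the hidden-to-hidden matrix itself, once per forward pass
        dropped = F.dropout(self.raw_w_hh, p=self.weight_p, training=self.training)
        self.lstm.weight_hh_l0.data = dropped
        return self.lstm(x, hidden)

out, _ = WeightDropLSTM(10, 20)(torch.randn(7, 3, 10))
print(out.shape)  # torch.Size([7, 3, 20])
```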
Because the dropout occurs on the embedding matrix that is used for a full forward and backward pass, all occurrences of a specific word will disappear within that pass, equivalent to performing variational dropout on the connection between the one-hot embedding and the embedding lookup.
# 4.4. Weight tying
Weight tying (Inan et al., 2016; Press & Wolf, 2016) shares the weights between the embedding and softmax layer, substantially reducing the total parameter count in the model. The technique has theoretical motivation (Inan et al., 2016) and prevents the model from having to learn a one-to-one correspondence between the input and output, resulting in substantial improvements to the standard LSTM language model.
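In code, the tying amounts to a single shared matrix (the 400-dimensional embedding matches the setting used later in the paper; the vocabulary size here is illustrative):

```python
import torch.nn as nn

vocab_size, emb_size = 10000, 400
encoder = nn.Embedding(vocab_size, emb_size)
decoder = nn.Linear(emb_size, vocab_size)
decoder.weight = encoder.weight  # one (vocab_size x emb_size) matrix serves both layers
```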
# 4.5. Independent embedding size and hidden size
1708.02182 | 19 | # 4.5. Independent embedding size and hidden size
In most natural language processing tasks, both pre-trained and trained word vectors are of relatively low dimensionality, frequently between 100 and 400 dimensions in size. Most previous LSTM language models tie the dimensionality of the word vectors to the dimensionality of the LSTM's hidden state. Even if reducing the word embedding size was not beneficial in preventing overfitting, the easiest reduction in total parameters for a language model is reducing the word vector size. To achieve this, the first and last LSTM layers are modified such that their input and output dimensionality respectively are equal to the reduced embedding size.
# 4.6. Activation Regularization (AR) and Temporal Activation Regularization (TAR)
1708.02182 | 20 | # 4.6. Activation Regularization (AR) and Temporal Activation Regularization (TAR)
L2-regularization is often used on the weights of the network to control the norm of the resulting model and reduce overfitting. In addition, L2 decay can be used on the individual unit activations and on the difference in outputs of an RNN at different time steps; these strategies are labeled activation regularization (AR) and temporal activation regularization (TAR) respectively (Merity et al., 2017). AR penalizes activations that are significantly larger than 0 as a means of regularizing the network. Concretely, AR is defined as
α L2(m ⊙ ht)

# 4.3. Embedding dropout
Following Gal & Ghahramani (2016), we employ embedding dropout. This is equivalent to performing dropout on the embedding matrix at a word level, where the dropout is broadcast across all the word vector's embedding. The remaining non-dropped-out word embeddings are scaled by 1/(1 − pe) where pe is the probability of embedding dropout.
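A NumPy sketch of dropping whole rows of the embedding matrix in this way (the matrix size matches the experiment settings and pe = 0.1 is the embedding-dropout value reported there):

```python
import numpy as np

def embedding_dropout(emb_matrix, pe=0.1, rng=np.random):
    keep = rng.random(size=(emb_matrix.shape[0], 1)) > pe  # one keep/drop decision per word
    return emb_matrix * keep / (1 - pe)                    # a dropped word vanishes for the whole pass

emb = np.ones((10000, 400))
print((embedding_dropout(emb).sum(axis=1) == 0).mean())    # roughly pe of the vocabulary is zeroed
```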
Using the notation from AR, TAR is defined as
β L2(ht − ht+1)
where β is a scaling coefficient. As in Merity et al. (2017), the AR and TAR loss are only applied to the output of the final RNN layer as opposed to being applied to all layers.
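Both penalties are a few lines of tensor code. The sketch below uses α = 2 and β = 1, the values reported in the experiment details, and applies them to the final layer's output as described; the use of a squared mean for the L2 term is an implementation assumption.

```python
import torch

def ar_tar_penalty(h, dropout_mask, alpha=2.0, beta=1.0):
    # h: (timesteps, batch, hidden) output of the final RNN layer
    ar = alpha * (dropout_mask * h).pow(2).mean()   # AR: discourage large activations
    tar = beta * (h[1:] - h[:-1]).pow(2).mean()     # TAR: discourage large changes between timesteps
    return ar + tar

h = torch.randn(70, 40, 1150)
print(ar_tar_penalty(h, torch.ones_like(h)))
```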
the recurrent weight matrices. For WT2, we increase the input dropout to 0.65 to account for the increased vocabulary size. For all experiments, we use AR and TAR values of 2 and 1 respectively, and tie the embedding and softmax weights. These hyperparameters were chosen through trial and error and we expect further improvements may be possible if a fine-grained hyperparameter search were to be conducted. In the results, we abbreviate our approach as AWD-LSTM for ASGD Weight-Dropped LSTM.
# 5. Experiment Details
For evaluating the impact of these approaches, we perform language modeling over a preprocessed version of the Penn Treebank (PTB) (Mikolov et al., 2010) and the WikiText-2 (WT2) data set (Merity et al., 2016).
PTB: The Penn Treebank data set has long been a central data set for experimenting with language modeling. The data set is heavily preprocessed and does not contain capital letters, numbers, or punctuation. The vocabulary is also capped at 10,000 unique words, quite small in comparison to most modern datasets, which results in a large number of out of vocabulary (OoV) tokens.
WT2: WikiText-2 is sourced from curated Wikipedia articles and is approximately twice the size of the PTB data set. The text is tokenized and processed using the Moses tokenizer (Koehn et al., 2007), frequently used for machine translation, and features a vocabulary of over 30,000 words. Capitalization, punctuation, and numbers are retained in this data set.
All experiments use a three-layer LSTM model with 1150 units in the hidden layer and an embedding of size 400. The loss was averaged over all examples and timesteps. All embedding weights were uniformly initialized in the interval [−0.1, 0.1] and all other weights were initialized between [−1/√H, 1/√H], where H is the hidden size.
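The resulting model skeleton, with the first and last layers sized to the embedding dimensionality (Section 4.5) and tied input/output weights (Section 4.4), can be sketched as follows; this is a simplified outline, not the authors' implementation:

```python
import torch.nn as nn

vocab_size, emb_size, hidden_size = 10000, 400, 1150
rnns = nn.ModuleList([
    nn.LSTM(emb_size, hidden_size),   # first layer consumes the 400-d word vectors
    nn.LSTM(hidden_size, hidden_size),
    nn.LSTM(hidden_size, emb_size),   # last layer projects back to the embedding size
])
encoder = nn.Embedding(vocab_size, emb_size)
decoder = nn.Linear(emb_size, vocab_size)
decoder.weight = encoder.weight       # weight tying
```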
For training the models, we use the NT-ASGD algorithm discussed in the previous section for 750 epochs with L equivalent to one epoch and n = 5. We use a batch size of 80 for WT2 and 40 for PTB. Empirically, we found relatively large batch sizes (e.g., 40-80) performed better than smaller sizes (e.g., 10-20) for NT-ASGD. After completion, we run ASGD with T = 0 and hot-started w0 as a fine-tuning step to further improve the solution. For this fine-tuning step, we terminate the run using the same non-monotonic criterion detailed in Algorithm 1.
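The flavor of the non-monotonic trigger can be sketched as below, with n = 5 and one validation check per epoch as used here; the exact indexing of Algorithm 1 is not reproduced, so treat this as a schematic rather than the paper's algorithm:

```python
def should_switch_to_averaging(val_perplexities, n=5):
    # switch once the newest validation perplexity is no better than the best
    # value seen at least n checks ago, i.e. progress has stalled non-monotonically
    if len(val_perplexities) <= n:
        return False
    return val_perplexities[-1] > min(val_perplexities[:-n])

print(should_switch_to_averaging([100.0, 90.0, 85.0, 84.0, 83.9, 83.8, 83.9, 84.0, 84.1, 84.2]))  # True
```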
We carry out gradient clipping with maximum norm 0.25 and use an initial learning rate of 30 for all experiments. We use a random BPTT length which is N(70, 5) with probability 0.95 and N(35, 5) with probability 0.05. The values used for dropout on the word vectors, the output between LSTM layers, the output of the final LSTM layer, and embedding dropout were (0.4, 0.3, 0.4, 0.1) respectively. For the weight-dropped LSTM, a dropout of 0.5 was applied to
# 6. Experimental Analysis
We present the single-model perplexity results for both our models (AWD-LSTM) and other competitive models in Tables 1 and 2 for PTB and WT2 respectively. On both data sets we improve the state-of-the-art, with our vanilla LSTM model beating the state of the art by approximately 1 unit on PTB and 0.1 units on WT2.
In comparison to other recent state-of-the-art models, our model uses a vanilla LSTM. Zilly et al. (2016) propose the recurrent highway network, which extends the LSTM to allow multiple hidden state updates per timestep. Zoph & Le (2016) use a reinforcement learning agent to generate an RNN cell tailored to the specific task of language modeling, with the cell far more complex than the LSTM.
Independently of our work, Melis et al. (2017) apply extensive hyperparameter search to an LSTM based language modeling implementation, analyzing the sensitivity of RNN based language models to hyperparameters. Unlike our work, they use a modified LSTM, which caps the input gate it to be min(1 − ft, it), use Adam with β1 = 0 rather than SGD or ASGD, use skip connections between LSTM layers, and use a black box hyperparameter tuner for exploring models and settings. Of particular interest is that their hyperparameters were tuned individually for each data set compared to our work which shared almost all hyperparameters between PTB and WT2, including the embedding and hidden size for both data sets. Due to this, they used fewer model parameters than our model and found shallow LSTMs of one or two layers worked best for WT2.
Like our work, Melis et al. (2017) find that the underlying LSTM architecture can be highly effective compared to complex custom architectures when well tuned hyperparameters are used. The approaches used in our work and Melis et al. (2017) may be complementary and would be worth exploration.
# 7. Pointer models
Model | Parameters | Validation | Test
Mikolov & Zweig (2012) - KN-5 | 2M‡ | – | 141.2
Mikolov & Zweig (2012) - KN5 + cache | 2M‡ | – | 125.7
Mikolov & Zweig (2012) - RNN | 6M‡ | – | 124.7
Mikolov & Zweig (2012) - RNN-LDA | 7M‡ | – | 113.7
Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache | 9M‡ | – | 92.0
Zaremba et al. (2014) - LSTM (medium) | 20M | 86.2 | 82.7
Zaremba et al. (2014) - LSTM (large) | 66M | 82.2 | 78.4
Gal & Ghahramani (2016) - Variational LSTM (medium) | 20M | 81.9 ± 0.2 | 79.7 ± 0.1
Gal & Ghahramani (2016) - Variational LSTM (medium, MC) | 20M | – | 78.6 ± 0.1
Gal & Ghahramani (2016) - Variational LSTM (large) | 66M | 77.9 ± 0.3 | 75.2 ± 0.2
Gal & Ghahramani (2016) - Variational LSTM (large, MC) | 66M | – | 73.4 ± 0.0
Kim et al. (2016) - CharCNN | 19M | – | 78.9
Merity et al. (2016) - Pointer Sentinel-LSTM | 21M | 72.4 | 70.9
Grave et al. (2016) - LSTM | – | – | 82.3
Grave et al. (2016) - LSTM + continuous cache pointer | – | – | 72.1
Inan et al. (2016) - Variational LSTM (tied) + augmented loss | 24M | 75.7 | 73.2
Inan et al. (2016) - Variational LSTM (tied) + augmented loss | 51M | 71.1 | 68.5
Zilly et al. (2016) - Variational RHN (tied) | 23M | 67.9 | 65.4
Zoph & Le (2016) - NAS Cell (tied) | 25M | – | 64.0
Zoph & Le (2016) - NAS Cell (tied) | 54M | – | 62.4
Melis et al. (2017) - 4-layer skip connection LSTM (tied) | 24M | 60.9 | 58.3
AWD-LSTM - 3-layer LSTM
Model | Parameters | Validation | Test
Inan et al. (2016) - Variational LSTM (tied) (h = 650) | 28M | 92.3 | 87.7
Inan et al. (2016) - Variational LSTM (tied) (h = 650) + augmented loss | 28M | 91.5 | 87.0
Grave et al. (2016) - LSTM | – | – | 99.3
Grave et al. (2016) - LSTM + continuous cache pointer | – | – | 68.9
Melis et al. (2017) - 1-layer LSTM (tied) | 24M | 69.3 | 65.9
Melis et al. (2017) - 2-layer skip connection LSTM (tied) | 24M | 69.1 | 65.9
AWD-LSTM - 3-layer LSTM (tied) | 33M | 68.6 | 65.8
AWD-LSTM - 3-layer LSTM (tied) + continuous cache pointer | 33M | 53.8 | 52.0
Table 2. Single model perplexity over WikiText-2. Models noting tied use weight tying on the embedding and softmax weights. Our model, AWD-LSTM, stands for ASGD Weight-Dropped LSTM.
substantial improvements to the underlying neural language model, it remained an open question as to how effective pointer augmentation may be, especially when improvements such as weight tying may act in mutually exclusive ways.
The neural cache model (Grave et al., 2016) can be added on top of a pre-trained language model at negligible cost. The neural cache stores the previous hidden states in memory cells and then uses a simple convex combination of the probability distributions suggested by the cache and the language model for prediction. The cache model has three hyperparameters: the memory size (window) for the cache, the coefficient of the combination (which determines how the two distributions are mixed), and the flatness of the cache distribution. All of these are tuned on the validation set once a trained language model has been obtained and require no training by themselves, making it quite inexpensive to use. The tuned values for these hyperparameters were (2000, 0.1, 1.0) for PTB and (3785, 0.1279, 0.662) for WT2 respectively.
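A NumPy sketch of such a cache, assuming the usual formulation in which stored hidden states are scored against the current one and the resulting distribution is mixed with the model's softmax; the window, mixing coefficient, and flatness below are the tuned PTB values quoted above, and the exact scoring form is an assumption based on Grave et al. (2016):

```python
import numpy as np

def cache_distribution(query_h, past_hs, past_words, vocab_size, theta=1.0):
    scores = np.exp(theta * past_hs.dot(query_h))   # flatness theta controls peakiness
    probs = np.zeros(vocab_size)
    np.add.at(probs, past_words, scores)            # mass goes to the words that followed similar states
    return probs / probs.sum()

def mix(p_model, p_cache, lam=0.1):
    return (1 - lam) * p_model + lam * p_cache      # simple convex combination

rng = np.random.default_rng(0)
vocab, hidden, window = 50, 8, 2000
p_model = np.full(vocab, 1.0 / vocab)
past_hs = rng.normal(size=(window, hidden))
past_words = rng.integers(0, vocab, size=window)
p = mix(p_model, cache_distribution(rng.normal(size=hidden), past_hs, past_words, vocab))
print(abs(p.sum() - 1.0) < 1e-9)  # True: the mixture is still a distribution
```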
Table 3 data (Δloss per word when the continuous cache pointer is added; the caption appears below).

Most-deteriorated words:
Word | Count | Δloss
. | 7632 | −696.45
, | 9857 | −687.49
of | 5816 | −365.21
= | 2884 | −342.01
to | 4048 | −283.10
in | 4178 | −222.94
<eos> | 3690 | −216.42
and | 5251 | −215.38
the | 12481 | −209.97
a | 3381 | −149.78
" | 2540 | −127.99
that | 1365 | −118.09
by | 1252 | −113.05
was | 2279 | −107.95
) | 1101 | −94.74
with | 1176 | −93.01
for | 1215 | −87.68
on | 1485 | −81.55
as | 1338 | −77.05
at | 879 | −59.86

Most-improved words: <unk> (count 11540, Δloss 5047.34) and Meridian (161, 1057.78), together with Churchill, Blythe, Sonic, Richmond, Starr, Australian, Pagan, Asahi, Hu, Hedgehog, Burma, 29, Mississippi, German, mill, Japanese, and Cooke; the remaining counts in that column are 137, 67, 97, 75, 101, 74, 234, 54, 39, 181, 43, 29, 35, 92, 72, 108, 67, and 33, and the remaining Δloss values are 849.43, 682.15, 554.95, 543.85, 429.18, 416.52, 366.36, 365.19, 316.24, 295.97, 285.58, 266.48, 263.65, 260.88, 241.59, 241.23, 237.76, and 231.11.
In Tables 1 and 2, we show that the model further improves the perplexity of the language model by as much as 6 perplexity points for PTB and 11 points for WT2. While this is smaller than the gains reported in Grave et al. (2016), which used an LSTM without weight tying, this is still a substantial drop. Given the simplicity of the neural cache model, and the lack of any trained components, these results suggest that existing neural language models remain fundamentally lacking, failing to capture long term dependencies or remember recently seen words effectively.
Table 3. The sum total difference in loss (log perplexity) that a given word results in over all instances in the validation data set of WikiText-2 when the continuous cache pointer is introduced. The right column contains the words with the twenty best improvements (i.e., where the cache was advantageous), and the left column the twenty most deteriorated (i.e., where the cache was disadvantageous).
likely well suited. These observations motivate the design of a cache framework that is more aware of the relative strengths of the two models.
To understand the impact the pointer had on the model, specifically the validation set perplexity, we detail the contribution that each word has on the cache model's overall perplexity in Table 3. We compute the sum of the total difference in the loss function value (i.e., log perplexity) between the LSTM-only and LSTM-with-cache models for the target words in the validation portion of the WikiText-2 data set. We present results for the sum of the difference as opposed to the mean since the latter undesirably overemphasizes infrequently occurring words for which the cache helps significantly and ignores frequently occurring words for which the cache provides modest improvements that cumulatively make a strong contribution.
The largest cumulative gain is in improving the handling of <unk> tokens, though this is over 11540 instances. The second best improvement, approximately one fifth the gain given by the <unk> tokens, is for Meridian, yet this word only occurs 161 times. This indicates the cache still helps significantly even for relatively rare words, further demonstrated by Churchill, Blythe, or Sonic. The cache is not beneficial when handling frequent word categories, such as punctuation or stop words, for which the language model is
# 8. Model Ablation Analysis
In Table 4, we present the values of validation and testing perplexity for different variants of our best-performing LSTM model. Each variant removes a form of optimization or regularization.
The first two variants deal with the optimization of the language models while the rest deal with the regularization. For the model using SGD with learning rate reduced by 2 using the same nonmonotonic fashion, there is a significant degradation in performance. This stands as empirical evidence regarding the benefit of averaging of the iterates. Using a monotonic criterion instead also hampered performance. Similarly, the removal of the fine-tuning step expectedly also degrades the performance. This step helps improve the estimate of the minimizer by resetting the memory of the previous experiment. While this process of fine-tuning can be repeated multiple times, we found little benefit in repeating it more than once.
The removal of regularization strategies paints a similar picture; the inclusion of all of the proposed strategies
Model | PTB Validation | PTB Test | WT2 Validation | WT2 Test
AWD-LSTM (tied) | 60.0 | 57.3 | 68.6 | 65.8
– fine-tuning | 60.7 | 58.8 | 69.1 | 66.0
– NT-ASGD | 66.3 | 63.7 | 73.3 | 69.7
– variable sequence lengths | 61.3 | 58.9 | 69.3 | 66.2
– embedding dropout | 65.1 | 62.7 | 71.1 | 68.1
– weight decay | 63.7 | 61.0 | 71.9 | 68.7
– AR/TAR | 62.7 | 60.3 | 73.2 | 70.1
– full sized embedding | 68.0 | 65.6 | 73.7 | 70.7
– weight-dropping | 71.1 | 68.9 | 78.4 | 74.9
Table 4. Model ablations for our best LSTM models reporting results over the validation and test set on Penn Treebank and WikiText-2. Ablations are split into optimization and regularization variants, sorted according to the achieved validation perplexity on WikiText-2.
was pivotal in ensuring state-of-the-art performance. The most extreme perplexity jump was in removing the hidden-to-hidden LSTM regularization provided by the weight-dropped LSTM. Without such hidden-to-hidden regularization, perplexity rises substantially, up to 11 points. This is in line with previous work showing the necessity of recurrent regularization in state-of-the-art models (Gal & Ghahramani, 2016; Inan et al., 2016).
We also experiment with static sequence lengths which we had hypothesized would lead to inefficient data usage. This also worsens the performance by approximately one perplexity unit. Next, we experiment with reverting to matching the sizes of the embedding vectors and the hidden states. This significantly increases the number of parameters in the network (to 43M in the case of PTB and 70M for WT2) and leads to degradation by almost 8 perplexity points, which we attribute to overfitting in the word embeddings. While this could potentially be improved with more aggressive regularization, the computational overhead involved with substantially larger embeddings likely outweighs any advantages. Finally, we experiment with the removal of embedding dropout, AR/TAR and weight decay. In all of the cases, the model suffers a perplexity increase of 2–6 points which we hypothesize is due to insufficient regularization in the network.
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
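The ablation above covers static versus variable sequence lengths and the AR/TAR activation penalties. The sketch below shows one plausible form of these two pieces in Python/PyTorch; the coefficients, the base BPTT window, and the sampling probabilities are illustrative assumptions rather than values taken from this excerpt.

```python
import numpy as np
import torch

def ar_tar_penalty(raw_outputs, dropped_outputs, alpha=2.0, beta=1.0):
    """Activation Regularization (AR) on the dropped RNN outputs and Temporal
    Activation Regularization (TAR) on differences of consecutive raw outputs;
    both terms are added to the cross-entropy loss. Coefficients are illustrative."""
    ar = alpha * dropped_outputs.pow(2).mean()
    tar = beta * (raw_outputs[1:] - raw_outputs[:-1]).pow(2).mean()
    return ar + tar

def sample_bptt_length(base_len=70, short_prob=0.05, std=5, min_len=5):
    """Variable-length BPTT: occasionally halve the base window, then jitter it
    with Gaussian noise so batch boundaries move between epochs."""
    seq = base_len if np.random.rand() > short_prob else base_len // 2
    return max(min_len, int(np.random.normal(seq, std)))
```

The variable window is what the "static sequence lengths" ablation removes: with a fixed window, the same token positions always start a BPTT segment, which the chunk above reports as roughly one perplexity point worse.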
1708.02182 | 42 | vestigate other regularization strategies including the use of variable BPTT length and achieve a new state-of-the-art perplexity on the PTB and WikiText-2 data sets. Our models outperform custom-built RNN cells and complex regularization strategies that preclude the possibility of using optimized libraries such as the NVIDIA cuDNN LSTM. Finally, we explore the use of a neural cache in conjunction with our proposed model and show that this further improves the performance, thus attaining an even lower state-of-the-art perplexity. While the regularization and optimization strategies proposed are demonstrated on the task of language modeling, we anticipate that they would be generally applicable across other sequence learning tasks.
# References
Arjovsky, M., Shah, A., and Bengio, Y. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pp. 1120–1128, 2016.
Ba, J., Kiros, J., and Hinton, G. E. Layer normalization. CoRR, abs/1607.06450, 2016.
Balduzzi, D. and Ghifary, M. Strongly-typed recurrent neural networks. arXiv preprint arXiv:1602.02218, 2016.
# 9. Conclusion | 1708.02182#42 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
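The conclusion above mentions combining the proposed model with a neural cache to reach a lower perplexity. The following sketch shows one way such a continuous cache could mix a similarity-weighted distribution over recently seen words with the model's softmax; the flatness parameter theta, the mixing weight lam, and the function name are assumptions for illustration, not the authors' code.

```python
import numpy as np

def cache_mixture(p_model, h_t, cache_states, cache_words, vocab_size,
                  theta=0.66, lam=0.1):
    """Continuous-cache mixture (in the spirit of Grave et al.'s neural cache):
    blend the model's softmax p_model with a distribution over recently
    generated words, scored by similarity of the current hidden state h_t
    to the cached hidden states."""
    if len(cache_words) == 0:
        return p_model
    scores = np.exp(theta * cache_states @ h_t)   # one score per cached position
    p_cache = np.zeros(vocab_size)
    np.add.at(p_cache, cache_words, scores)       # sum scores for repeated words
    p_cache /= p_cache.sum()
    return (1.0 - lam) * p_model + lam * p_cache
```

Because the cache is nonparametric and applied only at evaluation time, it can be layered on top of an already-trained language model without retraining.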
1708.02182 | 43 | # 9. Conclusion
Bottou, L., Curtis, F. E., and Nocedal, J. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
In this work, we discuss regularization and optimization strategies for neural language models. We propose the weight-dropped LSTM, a strategy that uses a DropConnect mask on the hidden-to-hidden weight matrices, as a means to prevent overfitting across the recurrent connections. Further, we investigate the use of averaged SGD with a non-monotonic trigger for training language models and show that it outperforms SGD by a significant margin. We in- Bradbury, J., Merity, S., Xiong, C., and Socher, R. Quasi-Recurrent Neural Networks. arXiv preprint arXiv:1611.01576, 2016.
Cooijmans, T., Ballas, N., Laurent, C., and Courville, A. C. Recurrent batch normalization. CoRR, abs/1603.09025, 2016.
Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization.
| 1708.02182#43 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
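The chunk above describes averaged SGD with a non-monotonic trigger (NT-ASGD). A rough training loop consistent with that description might look as follows; the learning rate, the non-monotone interval n, the epoch budget, and the two callables are placeholders, not the authors' exact code.

```python
import torch

def train_nt_asgd(model, train_one_epoch, evaluate, lr=30.0, n=5, max_epochs=100):
    """NT-ASGD sketch: run plain SGD and switch to averaged SGD (ASGD) the first
    time the validation metric is worse than its best value from more than n
    checks ago (the non-monotonic trigger)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    history, triggered = [], False
    for epoch in range(max_epochs):
        train_one_epoch(model, optimizer)
        val_metric = evaluate(model)          # e.g. validation perplexity
        if (not triggered and epoch > n
                and val_metric > min(history[:-n], default=float("inf"))):
            # Trigger fired: continue training with iterate averaging switched on.
            optimizer = torch.optim.ASGD(model.parameters(), lr=lr, t0=0, lambd=0.0)
            triggered = True
        history.append(val_metric)
    return model
```

The key property is that a single bad validation check does not trigger averaging; only a failure to beat the best value seen more than n checks ago does, which removes the averaging-start hyperparameter the user would otherwise tune.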
1708.02182 | 44 | Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
E. Moses: Open source toolkit for statistical machine translation. In ACL, 2007.
Földiák, P. Learning invariance from transformation sequences. Neural Computation, 3(2):194–200, 1991.
Gal, Y. and Ghahramani, Z. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016.
Krueger, D., Maharaj, T., Kramár, J., Pezeshki, M., Ballas, N., Ke, N., Goyal, A., Bengio, Y., Larochelle, H., Courville, A., et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
Luciw, M. and Schmidhuber, J. Low complexity proto-value function learning from sensory observations with incremental slow feature analysis. Artificial Neural Networks and Machine Learning–ICANN 2012, pp. 279–287, 2012.
Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016.
Mandt, S., Hoffman, M. D., and Blei, D. M. Stochastic gradient descent as approximate Bayesian inference. arXiv preprint arXiv:1704.04289, 2017.
Hardt, M., Recht, B., and Singer, Y. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
Hinton, G. E. Connectionist learning procedures. Artificial Intelligence, 40(1-3):185–234, 1989.
Inan, H., Khosravi, K., and Socher, R. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling. arXiv preprint arXiv:1611.01462, 2016.
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
Melis, G., Dyer, C., and Blunsom, P. On the State of the Art of Evaluation in Neural Language Models. arXiv preprint arXiv:1707.05589, 2017.
Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer Sentinel Mixture Models. arXiv preprint arXiv:1609.07843, 2016.
Merity, S., McCann, B., and Socher, R. Revisiting activation regularization for language RNNs. arXiv preprint arXiv:1708.01009, 2017.
Jing, L., Shen, Y., Dubček, T., Peurifoy, J., Skirlo, S., Tegmark, M., and Soljačić, M. Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNN. arXiv preprint arXiv:1612.05231, 2016.
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
Mikolov, T. and Zweig, G. Context dependent recurrent neural network language model. SLT, 12:234–239, 2012.
Mikolov, T., Karafiát, M., Burget, L., Černocký, J., and Khudanpur, S. Recurrent neural network based language model. In INTERSPEECH, 2010.
Jonschkowski, R. and Brock, O. Learning state representations with robotic priors. Auton. Robots, 39:407–428, 2015.
Keskar, N. and Saon, G. A nonmonotone learning rate strategy for SGD training of deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 4974–4978. IEEE, 2015.
Panageas, I. and Piliouras, G. Gradient descent converges to minimizers: The case of non-isolated critical points. CoRR, abs/1605.00405, 2016.
Polyak, B. and Juditsky, A. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
Kim, Y., Jernite, Y., Sontag, D., and Rush, A. M. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
Press, O. and Wolf, L. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
Kingma, D. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Semeniuta, S., Severyn, A., and Barth, E. Recurrent dropout without memory loss. In COLING, 2016.
Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst,
Seo, M., Min, S., Farhadi, A., and Hajishirzi, H. Query-Reduction Networks for Question Answering. arXiv preprint arXiv:1606.04582, 2016.
| 1708.02182#48 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
Sutskever, I., Martens, J., Dahl, G., and Hinton, G. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139–1147, 2013.
Tallec, C. and Ollivier, Y. Unbiasing truncated backpropagation through time. arXiv preprint arXiv:1705.08209, 2017.
Tieleman, T. and Hinton, G. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31, 2012.
Wan, L., Zeiler, M., Zhang, S., LeCun, Y., and Fergus, R. Regularization of neural networks using DropConnect. In Proceedings of the 30th international conference on machine learning (ICML-13), pp. 1058–1066, 2013.
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.02182 | 50 | Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. The marginal value of adaptive gradient methods in machine learning. arXiv preprint arXiv:1705.08292, 2017.
Wisdom, S., Powers, T., Hershey, J., Le Roux, J., and Atlas, L. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 4880–4888, 2016.
Zaremba, W., Sutskever, I., and Vinyals, O. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
Zilly, J. G., Srivastava, R. K., Koutník, J., and Schmidhuber, J. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
Zoph, B. and Le, Q. V. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016. | 1708.02182#50 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | [
{
"id": "1611.01462"
},
{
"id": "1606.01305"
},
{
"id": "1608.05859"
},
{
"id": "1612.05231"
},
{
"id": "1707.05589"
},
{
"id": "1705.08292"
},
{
"id": "1704.04289"
},
{
"id": "1611.01576"
},
{
"id": "1705.08209"
},
{
"id": "1607.03474"
},
{
"id": "1612.04426"
},
{
"id": "1609.07843"
},
{
"id": "1708.01009"
},
{
"id": "1509.01240"
},
{
"id": "1606.04838"
},
{
"id": "1611.01578"
},
{
"id": "1606.04582"
},
{
"id": "1706.02677"
},
{
"id": "1602.02218"
}
] |
1708.00489 | 1 | Ozan Sener* Intel Labs [email protected]
Silvio Savarese Stanford University [email protected]
# ABSTRACT
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (i.e. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, i.e. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
# 1 INTRODUCTION | 1708.00489#1 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 2 | # 1 INTRODUCTION
Deep convolutional neural networks (CNNs) have shown unprecedented success in many areas of research in computer vision and pattern recognition, such as image classification, object detection, and scene segmentation. Although CNNs are universally successful in many tasks, they have a major drawback; they need a very large amount of labeled data to be able to learn their large number of parameters. More importantly, it is almost always better to have more data since the accuracy of CNNs is often not saturated with increasing dataset size. Hence, there is a constant desire to collect more and more data. Although this is a desired behavior from an algorithmic perspective (higher representative power is typically better), labeling a dataset is a time-consuming and expensive task. These practical considerations raise a critical question: "what is the optimal way to choose data points to label such that the highest accuracy can be obtained given a fixed labeling budget?" Active learning is one of the common paradigms to address this question. | 1708.00489#2 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 3 | The goal of active learning is to find effective ways to choose data points to label, from a pool of unlabeled data points, in order to maximize the accuracy. Although it is not possible to obtain a universally good active learning strategy (Dasgupta, 2004), there exist many heuristics (Settles, 2010) which have been proven to be effective in practice. Active learning is typically an iterative process in which a model is learned at each iteration and a set of points is chosen to be labelled from a pool of unlabelled points using these aforementioned heuristics. We experiment with many of these heuristics in this paper and find them not effective when applied to CNNs. We argue that the main factor behind this ineffectiveness is the correlation caused via batch acquisition/sampling. In the classical setting, the active learning algorithms typically choose a single point at each iteration; however, this is not feasible for CNNs since i) a single point is likely to have no statistically significant impact on the accuracy due to the local optimization methods, and ii) each iteration requires a full training until convergence which makes it intractable to query labels one-by-one. Hence, it is necessary to query
*Work completed while the author was at Stanford University.
| 1708.00489#3 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 4 | *Work completed while the author was at Stanford University.
labels for a large subset at each iteration and it results in correlated samples even for moderately small subset sizes.
In order to tailor an active learning method for the batch sampling case, we decided to define active learning as a core-set selection problem. The core-set selection problem aims to find a small subset given a large labeled dataset such that a model learned over the small subset is competitive over the whole dataset. Since we have no labels available, we perform the core-set selection without using the labels. In order to attack the unlabeled core-set problem for CNNs, we provide a rigorous bound between an average loss over any given subset of the dataset and the remaining data points via the geometry of the data points. As an active learning algorithm, we try to choose a subset such that this bound is minimized. Moreover, minimization of this bound turns out to be equivalent to the k-Center problem (Wolf, 2011) and we adopt an efficient approximate solution to this combinatorial optimization problem. We further study the behavior of our proposed algorithm empirically for the problem of image classification using three different datasets. Our empirical analysis demonstrates state-of-the-art performance by a large margin.
# 2 RELATED WORK | 1708.00489#4 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
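The chunk above states that minimizing the derived bound reduces to a k-Center problem solved approximately. One standard approximate solution is the greedy 2-approximation sketched below, used here as the batch acquisition step; representing each point by a feature vector (e.g., network activations) and using Euclidean distance are assumptions of this sketch, and the paper's exact solver may differ.

```python
import numpy as np

def k_center_greedy(features, labeled_idx, budget):
    """Greedy 2-approximation to the k-Center (minimax facility location)
    objective: repeatedly pick the point farthest from its nearest
    already-selected point.
    features: (N, d) array of per-point embeddings (an assumption here);
    labeled_idx: indices of points that are already labeled/covered."""
    selected = list(labeled_idx)
    # Distance from every point to its closest currently selected point.
    dist = np.linalg.norm(
        features[:, None, :] - features[None, selected, :], axis=-1).min(axis=1)
    chosen = []
    for _ in range(budget):
        idx = int(np.argmax(dist))            # farthest point becomes a new center
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(features - features[idx], axis=1))
    return chosen
```

Newly chosen centers immediately receive distance zero to themselves, so each of the budget iterations selects a distinct, maximally distant point; already-labeled points are never re-selected for the same reason.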
1708.00489 | 5 | # 2 RELATED WORK
We discuss the related work in the following categories separately. Briefly, our work is different from existing approaches in that i) it defines the active learning problem as core-set selection, ii) we consider both fully supervised and weakly supervised cases, and iii) we rigorously address the core-set selection problem directly for CNNs with no extra assumption.
Active Learning Active learning has been widely studied and most of the early work can be found in the classical survey of Settles (2010). It covers acquisition functions such as information theoretical methods (MacKay, 1992), ensemble approaches (McCallumzy & Nigamy, 1998; Freund et al., 1997) and uncertainty based methods (Tong & Koller, 2001; Joshi et al., 2009; Li & Guo, 2013). | 1708.00489#5 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 6 | Bayesian active learning methods typically use a non-parametric model like Gaussian process to estimate the expected improvement by each query (Kapoor et al., 2007) or the expected error after a set of queries (Roy & McCallum, 2001). These approaches are not directly applicable to large CNNs since they do not scale to large-scale datasets. A recent approach by Gal & Ghahramani (2016) shows an equivalence between dropout and approximate Bayesian inference enabling the application of Bayesian methods to deep learning. Although Bayesian active learning has been shown to be effective for small datasets (Gal et al., 2017), our empirical analysis suggests that they do not scale to large-scale datasets because of batch sampling.
One important class is that of uncertainty based methods, which try to find hard examples using heuristics like highest entropy (Joshi et al., 2009), and geometric distance to decision boundaries (Tong & Koller, 2001; Brinker, 2003). Our empirical analysis finds them not to be effective for CNNs. | 1708.00489#6 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 7 | There are recent optimization based approaches which can trade-off uncertainty and diversity to obtain a diverse set of hard examples in the batch mode active learning setting. Both Elhamifar et al. (2013) and Yang et al. (2015) design a discrete optimization problem for this purpose and use its convex surrogate. Similarly, Guo (2010) casts a similar problem as matrix partitioning. However, the optimization algorithms proposed in these papers use n² variables where n is the number of data points. Hence, they do not scale to large datasets. There are also many pool based active learning algorithms designed for the specific class of machine learning algorithms like k-nearest neighbors and naive Bayes (Wei et al., 2015), logistic regression Hoi et al. (2006); Guo & Schuurmans (2008), and linear regression with Gaussian noise (Yu et al., 2006). Even in the algorithm agnostic case, one can design a set-cover algorithm to cover the hypothesis space using sub-modularity (Guillory & Bilmes, 2010; Golovin & Krause, 2011). On the other hand, Demir et al. (2011) uses a heuristic to first | 1708.00489#7 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 8 | Bilmes, 2010; Golovin & Krause, 2011). On the other hand, Demir et al. (2011) uses a heuristic to first filter the pool based on uncertainty and then choose points to label using diversity. Our algorithm can be considered to be in this class; however, we do not use any uncertainty information. Our algorithm is also the first one which is applied to CNNs. Most similar to ours are (Joshiy et al., 2010) and (Wang & Ye, 2015). Joshiy et al. (2010) uses a similar optimization problem. However, they offer no theoretical justification or analysis. Wang & Ye (2015) proposes to use empirical risk minimization like us; however, they try to minimize the difference between two distributions (maximum mean discrepancy between i.i.d. samples from the dataset and the actively selected samples) instead of | 1708.00489#8 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 9 |
core-set loss. Moreover, both algorithms are also not experimented with CNNs. In our experimental study, we compare with (Wang & Ye, 2015).
Recently, a discrete optimization based method (Berlind & Urner, 2015) which is similar to ours has been presented for k-NN type algorithms in the domain shift setting. Although our theoretical analysis borrows some techniques from them, their results are only valid for k-NNs.
Active learning algorithms for CNNs are also recently presented in (Wang et al., 2016; Stark et al., 2015). Wang et al. (2016) propose a heuristic based algorithm which directly assigns labels to the data points with high confidence and queries labels for the ones with low confidence. Moreover, Stark et al. (2015) specifically targets recognizing CAPTCHA images. Although their results are promising for CAPTCHA recognition, their method is not effective for image classification. We discuss limitations of both approaches in Section 5. | 1708.00489#9 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 10 | On the theoretical side, it is shown that greedy active learning is not possible in algorithm and data agnostic case (Dasgupta, 2005). However, there are data dependent results showing that it is indeed possible to obtain a query strategy which has better sample complexity than querying all points. These results either use assumptions about data-dependent realizability of the hypothesis space like (Gonen et al., 2013) or a data dependent measure of the concept space called disagreement coefficient (Hanneke, 2007). It is also possible to perform active learning in a batch setting using the greedy algorithm via importance sampling (Ganti & Gray, 2012). Although the aforementioned algorithms enjoy theoretical guarantees, they do not apply to large-scale problems.
Core-Set Selection The closest literature to our work is the problem of core-set selection since we define active learning as a core-set selection problem. This problem considers a fully labeled dataset and tries to choose a subset of it such that the model trained on the selected subset will perform as closely as possible to the model trained on the entire dataset. For specific learning algorithms, there are methods like core-sets for SVM (Tsang et al., 2005) and core-sets for k-Means and k-Medians (Har-Peled & Kushal, 2005). However, we are not aware of such a method for CNNs. | 1708.00489#10 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 11 | The most similar algorithm to ours is the unsupervised subset selection algorithm in (Wei et al., 2013). It uses a facility location problem to find a diverse cover for the dataset. Our algorithm differs in that it uses a slightly different formulation of facility location problem. Instead of the min-sum, we use the minimax (Wolf, 2011) form. More importantly, we apply this algorithm for the first time to the problem of active learning and provide theoretical guarantees for CNNs.
Weakly-Supervised Deep Learning Our paper is also related to semi-supervised deep learning since we experiment with active learning in both the fully-supervised and weakly-supervised schemes. One of the early weakly-supervised convolutional neural network algorithms was Ladder networks (Rasmus et al., 2015). Recently, we have seen adversarial methods which can learn a data distribution as a result of a two-player non-cooperative game (Salimans et al., 2016; Goodfellow et al., 2014; Radford et al., 2015). These methods are further extended to feature learning (Dumoulin et al., 2016; Donahue et al., 2016). We use Ladder networks in our experiments; however, our method is agnostic to the weakly-supervised learning algorithm choice and can utilize any model.
# 3 PROBLEM DEFINITION | 1708.00489#11 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
1708.00489 | 12 | # 3 PROBLEM DEFINITION
In this section, we formally define the problem of active learning in the batch setting and set up the notation for the rest of the paper. We are interested in a C class classification problem defined over a compact space X and a label space Y = {1, . . . , C}. We also consider a loss function l(·, ·; w) : X × Y → R parametrized over the hypothesis class (w), e.g. parameters of the deep learning algorithm. We further assume class-specific regression functions η_c(x) = p(y = c|x) to be λ_η-Lipschitz continuous for all c.
We consider a large collection of data points which are sampled i.i.d. over the space Z = X × Y as {x_i, y_i}_{i∈[n]} ∼ p_Z where [n] = {1, . . . , n}. We further consider an initial pool of data-points chosen uniformly at random as s^0 = {s^0(j) ∈ [n]}_{j∈[m]}. | 1708.00489#12 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
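The setup in chunk 12 above — a pool {xi, yi}i∈[n] drawn i.i.d. from pZ, an initial labelled pool s0 of m indices chosen uniformly at random, and a query budget b — can be mirrored in a minimal sketch. The synthetic data, sizes, and variable names below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A pool of n points drawn i.i.d. from an (unknown) distribution p_Z over X x Y,
# with C classes; the data here are synthetic stand-ins.
n, d, C = 1000, 16, 10
X = rng.normal(size=(n, d))        # features x_i for i in [n]
y = rng.integers(0, C, size=n)     # labels y_i, hidden from the learner

# Initial labelled pool s^0: m indices chosen uniformly at random from [n].
m, b = 50, 100                     # |s^0| = m, query budget b
s0 = rng.choice(n, size=m, replace=False)

# The active learner sees all of X but only the labels indexed by s^0.
visible_labels = {int(j): int(y[j]) for j in s0}
print(f"pool size n={n}, labelled |s0|={len(s0)}, budget b={b}")
```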
1708.00489 | 13 | An active learning algorithm only has access to {xi}i∈[n] and {ys0(j)}j∈[m]. In other words, it can only see the labels of the points in the initial sub-sampled pool. It is also given a budget b of queries
to ask an oracle, and a learning algorithm As which outputs a set of parameters w given a labelled set s. The active learning with a pool problem can simply be defined as
$\min_{s^1:|s^1|\le b} \mathbb{E}_{x,y\sim p_Z}\left[l(x, y; A_{s^0\cup s^1})\right]$ (1)
In other words, an active learning algorithm can choose b extra points and get them labelled by an oracle to minimize the future expected loss. There are a few differences between our formulation and the classical definition of active learning. Classical methods consider the case in which the budget is 1 (b = 1), but a single point has a negligible effect in a deep learning regime; hence we consider the batch case. It is also very common to consider multiple rounds of this game. We also follow the multiple-round formulation with a myopic approach, solving a single round of labelling as: | 1708.00489#13 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
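Equations (1)–(2) in chunk 13 cast batch active learning as a round-based game: pick at most b unlabelled indices, have an oracle label them, retrain A on every label gathered so far, and repeat myopically. A schematic loop under those definitions might look as follows; `select_batch`, `train_model`, and `oracle_label` are hypothetical placeholders for the acquisition rule, the learner, and the labelling process, not components specified in the paper.

```python
import numpy as np
from typing import Callable, Dict, Set

def active_learning_rounds(
    X: np.ndarray,
    s0: Set[int],
    oracle_label: Callable[[int], int],                      # returns y_j for index j
    select_batch: Callable[[np.ndarray, Set[int], int], Set[int]],
    train_model: Callable[[np.ndarray, Dict[int, int]], object],
    b: int,
    rounds: int,
):
    """Myopic multi-round batch active learning in the spirit of Eqs. (1)-(2)."""
    labelled = {j: oracle_label(j) for j in s0}              # labels of s^0
    model = train_model(X, labelled)
    for _ in range(rounds):
        # Round k: choose s^{k+1} with |s^{k+1}| <= b from the unlabelled pool.
        batch = select_batch(X, set(labelled), b)
        labelled.update({j: oracle_label(j) for j in batch})
        # Retrain on s^0 union ... union s^{k+1}.
        model = train_model(X, labelled)
    return model, labelled
```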
1708.00489 | 14 | $\min_{s^{k+1}:|s^{k+1}|\le b} \mathbb{E}_{x,y\sim p_Z}\left[l(x, y; A_{s^0\cup \ldots \cup s^{k+1}})\right]$ (2)
For brevity, we only discuss the first iteration (k = 0), although we apply the procedure over multiple rounds.
At each iteration, an active learning algorithm has two stages: 1. identifying a set of data-points and presenting them to an oracle to be labelled, and 2. training a classifier using both the new and the previously labelled data-points. The second stage (training the classifier) can be done in a fully supervised or a weakly-supervised manner. Fully-supervised is the case where the classifier is trained using only the labelled data-points. Weakly-supervised is the case where training also utilizes the points which are not labelled yet. Although the existing literature focuses only on active learning for fully-supervised models, we consider both cases and experiment on both.
# 4 METHOD
4.1 ACTIVE LEARNING AS A SET COVER | 1708.00489#14 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
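Chunk 14 splits each round into a selection stage and a training stage, and notes that the training stage may be fully supervised (labelled points only) or weakly supervised (unlabelled points also enter the training objective, e.g. via a Ladder network). The toy interface below only illustrates that distinction; scikit-learn's `SelfTrainingClassifier` and `LogisticRegression` are used purely as stand-ins for a weakly-supervised learner and are not the models used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

def train_stage(X: np.ndarray, labelled: dict, weakly_supervised: bool = False):
    """Second stage of a round: fit a classifier on the labels gathered so far.

    Fully supervised: fit on the labelled indices only.
    Weakly supervised: also expose the unlabelled points (marked with -1) to a
    semi-supervised wrapper, standing in for e.g. a Ladder network.
    """
    idx = np.fromiter(labelled.keys(), dtype=int)
    y_lab = np.fromiter(labelled.values(), dtype=int)
    if not weakly_supervised:
        return LogisticRegression(max_iter=1000).fit(X[idx], y_lab)
    y_all = np.full(len(X), -1)
    y_all[idx] = y_lab
    return SelfTrainingClassifier(LogisticRegression(max_iter=1000)).fit(X, y_all)
```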
1708.00489 | 15 | # 4 METHOD
4.1 ACTIVE LEARNING AS A SET COVER
In the classical active learning setting, the algorithm acquires labels one by one by querying an oracle (i.e. b = 1). Unfortunately, this is not feasible when training CNNs since: i) a single point will not have a statistically significant impact on the model due to the local optimization algorithms, and ii) it is infeasible to train as many models as there are points, since many practical problems of interest are very large-scale. Hence, we focus on the batch active learning problem, in which the active learning algorithm chooses a moderately large set of points to be labelled by an oracle at each iteration.
In order to design an active learning strategy which is effective in the batch setting, we consider the following upper bound on the active learning loss formally defined in (1):
$\mathbb{E}_{x,y\sim p_Z}\left[l(x, y; A_s)\right] \le \underbrace{\left|\mathbb{E}_{x,y\sim p_Z}\left[l(x, y; A_s)\right] - \frac{1}{n}\sum_{i\in[n]} l(x_i, y_i; A_s)\right|}_{\text{Generalization Error}} + \underbrace{\frac{1}{|s|}\sum_{j\in s} l(x_j, y_j; A_s)}_{\text{Training Error}} + \underbrace{\left|\frac{1}{n}\sum_{i\in[n]} l(x_i, y_i; A_s) - \frac{1}{|s|}\sum_{j\in s} l(x_j, y_j; A_s)\right|}_{\text{Core-Set Loss}}$ (3)
# Core-Set Loss | 1708.00489#15 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
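In the bound of equation (3) from chunk 15, the training error and the core-set loss are plain empirical averages once per-point losses are available; only the generalization term needs theory. Below is a minimal helper under the (hypothetical) assumption that the per-point losses l(x_i, y_i; A_s) are supplied as an array.

```python
import numpy as np

def training_and_coreset_terms(losses: np.ndarray, s: np.ndarray):
    """Empirical terms of bound (3) for a labelled index set s.

    losses[i] = l(x_i, y_i; A_s) over the full pool [n]; computing this needs
    every label, so in practice it is only an after-the-fact diagnostic.
    """
    full_mean = losses.mean()              # (1/n) * sum over i in [n]
    train_error = losses[s].mean()         # (1/|s|) * sum over j in s
    core_set_loss = abs(full_mean - train_error)
    return train_error, core_set_loss

# Example on a synthetic loss vector and a random subset.
rng = np.random.default_rng(1)
losses = rng.exponential(scale=1.0, size=1000)
s = rng.choice(1000, size=100, replace=False)
print(training_and_coreset_terms(losses, s))
```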
1708.00489 | 16 | # Core-Set Loss
The quantity we are interested in is the population risk of the model learned using a small labelled subset (s). The population risk is controlled by the training error of the model on the labelled subset, the generalization error over the full dataset ([n]), and a term we define as the core-set loss. The core-set loss is simply the difference between the average empirical loss over the set of points for which we have labels and the average empirical loss over the entire dataset, including unlabelled points. Empirically, it is widely observed that CNNs are highly expressive, leading to very low training error, and that they typically generalize well for various visual problems. Moreover, the generalization error of CNNs has also been studied theoretically and shown to be bounded by Xu & Mannor (2012). Hence, the critical part for active learning is the core-set loss. Following this observation, we re-define the active learning problem as:
$\min_{s^1:|s^1|\le b} \left|\frac{1}{n}\sum_{i\in[n]} l(x_i, y_i; A_{s^0\cup s^1}) - \frac{1}{|s^0 + s^1|}\sum_{j\in s^0\cup s^1} l(x_j, y_j; A_{s^0\cup s^1})\right|$ (4)
| 1708.00489#16 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
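Objective (4) in chunk 16 scores a candidate batch s1 by how far the average loss over s0 ∪ s1 sits from the average loss over the whole pool. As the next chunk points out, it cannot be evaluated without the missing labels; the sketch below only shows what the criterion would compute if an oracle loss vector were (hypothetically) available.

```python
import numpy as np

def objective_4(losses: np.ndarray, s0: np.ndarray, s1: np.ndarray) -> float:
    """Value of objective (4) for a candidate batch s1, assuming the per-point
    losses under A_{s0 union s1} were known for the whole pool (they are not)."""
    s = np.union1d(s0, s1)
    return float(abs(losses.mean() - losses[s].mean()))

# Score a few random candidate batches against an oracle loss vector.
rng = np.random.default_rng(2)
n, b = 1000, 50
losses = rng.exponential(size=n)
s0 = rng.choice(n, size=100, replace=False)
pool = np.setdiff1d(np.arange(n), s0)
candidates = [rng.choice(pool, size=b, replace=False) for _ in range(5)]
best = min(candidates, key=lambda s1: objective_4(losses, s0, s1))
print(objective_4(losses, s0, best))
```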
1708.00489 | 17 | Figure 1: Visualization of Theorem 1. Consider the set of selected points s and the points in the remainder of the dataset [n] \ s. Our result shows that if s is a $\delta_s$ cover of the dataset, then $\left|\frac{1}{n}\sum_{i\in[n]} l(x_i, y_i; A_s) - \frac{1}{|s|}\sum_{j\in s} l(x_j, y_j; A_s)\right| \le \mathcal{O}(\delta_s) + \mathcal{O}\left(\sqrt{1/n}\right)$
Informally, given the initial labelled set (s0) and the budget (b), we are trying to find a set of points to query labels (s1) such that when we learn a model, the performance of the model on the labelled subset and that on the whole dataset will be as close as possible.
4.2 CORE-SETS FOR CNNS
The optimization objective we define in (4) is not directly computable since we do not have access to all the labels (i.e. [n] \ (s0 ∪ s1) is unlabelled). Hence, in this section we give an upper bound for this objective function which we can optimize. | 1708.00489#17 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | [
{
"id": "1605.09782"
},
{
"id": "1603.04467"
},
{
"id": "1703.02910"
},
{
"id": "1606.00704"
},
{
"id": "1511.06434"
}
] |
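Figure 1 in chunk 17 states the guarantee geometrically: if the selected set s is a δs cover of the pool, the core-set loss is O(δs) + O(√(1/n)). The cover radius of a given selection is straightforward to compute in feature space; the sketch below computes that radius for a random selection and is not the paper's selection algorithm.

```python
import numpy as np

def cover_radius(X: np.ndarray, s: np.ndarray) -> float:
    """delta_s: the largest distance from any pool point to its nearest selected
    point, i.e. the smallest radius for which s is a delta_s cover of X."""
    # Distances from every pool point to every selected point.
    dists = np.linalg.norm(X[:, None, :] - X[s][None, :, :], axis=-1)
    return float(dists.min(axis=1).max())

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))
s = rng.choice(500, size=25, replace=False)
print(f"delta_s = {cover_radius(X, s):.3f}")
```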