Latent regressor We consider an alternative encoder training that minimizes a reconstruction loss L(z, E(G(z))), either after or jointly with regular GAN training, called the latent regressor or joint latent regressor respectively. We use a sigmoid cross-entropy loss L, as it naturally maps to a uniformly distributed output space. Intuitively, a drawback of this approach is that, unlike the encoder in a BiGAN, the latent regressor encoder E is trained only on generated samples G(z) and never "sees" real data x ∼ p_X. While this may not be an issue at the theoretical optimum where p_G(x) = p_X(x) exactly (i.e., G perfectly generates the data distribution p_X), in practice, for highly complex data distributions p_X such as the distribution of natural images, the generator will almost never achieve this perfect result. The fact that the real data x are never input to this type of encoder limits its utility as a feature representation for related tasks, as shown later in this section.
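As a concrete illustration of this baseline, the sketch below (hypothetical PyTorch code, not the paper's implementation; rescaling z to [0, 1] is our assumption for pairing a uniform latent with a sigmoid cross-entropy loss) performs one latent regressor update in which the encoder only ever sees generated samples G(z):

```python
import torch
import torch.nn.functional as F

def latent_regressor_step(E, G, opt_E, batch_size=128, z_dim=50, device="cpu"):
    """One latent regressor update: E is fit to pairs (z, G(z)) drawn from the
    generator, so it never observes real data x ~ p_X."""
    z = torch.rand(batch_size, z_dim, device=device) * 2 - 1   # z ~ [U(-1, 1)]^z_dim
    with torch.no_grad():                                      # G fixed (the "after GAN training" variant)
        x_fake = G(z)
    logits = E(x_fake)                                         # E outputs one logit per latent unit
    target = (z + 1) / 2                                       # rescale U(-1, 1) targets into [0, 1]
    loss = F.binary_cross_entropy_with_logits(logits, target)  # sigmoid cross-entropy L(z, E(G(z)))
    opt_E.zero_grad()
    loss.backward()
    opt_E.step()
    return loss.item()
```

In the joint latent regressor variant, this loss would simply be added to the encoder's objective during GAN training rather than applied afterwards.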
4.2 PERMUTATION-INVARIANT MNIST
We first present results on permutation-invariant MNIST (LeCun et al., 1998). In the permutation-invariant setting, each 28×28 digit image must be treated as an unstructured 784D vector (Goodfellow et al., 2013). In our case, this condition is met by designing each module as a multi-layer perceptron (MLP), agnostic to the underlying spatial structure in the data (as opposed to a convnet, for example). See Appendix C.1 for more architectural and training details. We set the latent distribution p_Z = [U(−1, 1)]^50, a 50D continuous uniform distribution.
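The sketch below illustrates this setup with our own layer sizes (the paper's exact MLP architectures are given in its Appendix C.1, so these choices are illustrative only): each digit is handled as a flat 784-D vector, and the BiGAN discriminator scores joint (x, z) pairs rather than images alone.

```python
import torch
import torch.nn as nn

Z_DIM, X_DIM, HID = 50, 784, 1024      # 50-D uniform latent, flattened 28x28 digits; HID is illustrative

def mlp(d_in, d_out, out_act=None):
    layers = [nn.Linear(d_in, HID), nn.LeakyReLU(0.2),
              nn.Linear(HID, HID), nn.LeakyReLU(0.2),
              nn.Linear(HID, d_out)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

G = mlp(Z_DIM, X_DIM, nn.Sigmoid())    # generator G: z -> x, pixels in [0, 1]
E = mlp(X_DIM, Z_DIM, nn.Tanh())       # encoder E: x -> z, matching the [-1, 1] latent range
D = mlp(X_DIM + Z_DIM, 1)              # discriminator D scores joint (x, z) pairs

z = torch.rand(64, Z_DIM) * 2 - 1      # z ~ p_Z = [U(-1, 1)]^50
x = torch.rand(64, X_DIM)              # stand-in for a flattened MNIST batch
real_score = D(torch.cat([x, E(x)], dim=1))     # joint pair (x, E(x))
fake_score = D(torch.cat([G(z), z], dim=1))     # joint pair (G(z), z)
```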
Table 1 compares the encoding learned by a BiGAN-trained encoder E with the baselines described in Section 4.1, as well as autoencoders trained directly to minimize either ℓ2 or ℓ1 reconstruction error. The same architecture and optimization algorithm is used across all methods. All methods, including BiGAN, perform at roughly the same level. This result is not overly surprising given the relative simplicity of MNIST digits. For example, digits generated by G in a GAN nearly perfectly match the data distribution (qualitatively), making the latent regressor (LR) baseline method a reasonable choice, as argued in Section 4.1. Qualitative results are presented in Figure 2.
4.3 IMAGENET
Next, we present results from training BiGANs on ImageNet LSVRC (Russakovsky et al., 2015), a large-scale database of natural images.
Figure 3: The convolutional filters learned by the three modules (D, G, and E) of a BiGAN (left, top-middle) trained on the ImageNet (Russakovsky et al., 2015) database. We compare with the filters learned by a discriminator D trained with the same architecture (bottom-middle), as well as the filters reported by Noroozi & Favaro (2016), and by Krizhevsky et al. (2012) for fully supervised ImageNet training (right).
Figure 4: Qualitative results for ImageNet BiGAN training, including generator samples G(z), real data x, and corresponding reconstructions G(E(x)).
GANs trained on ImageNet cannot perfectly reconstruct the data, but often capture some interesting aspects. Here, each of D, G, and E is a convnet. In all experiments, the encoder E architecture follows AlexNet (Krizhevsky et al., 2012) through the fifth and last convolution layer (conv5). We also experiment with an AlexNet-based discriminator D as a baseline feature learning approach. We set the latent distribution p_Z = [U(−1, 1)]^200, a 200D continuous uniform distribution. Additionally, we experiment with higher resolution encoder input images (112 × 112 rather than the 64 × 64 used elsewhere) using the generalization described in Section 3.5. See Appendix C.2 for more architectural and training details.
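The sketch below illustrates this encoder design (hypothetical torchvision-based code; the paper's implementation uses Theano/Caffe, and the exact latent head is described in its Appendix C.2, so the head here is our own placeholder): AlexNet's convolutional stack through conv5 is reused as the trunk, followed by a small head that predicts the 200-D latent.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet

class AlexNetEncoder(nn.Module):
    """Encoder E whose trunk follows AlexNet through conv5 (torchvision's
    `features` block); the latent head is an illustrative placeholder."""
    def __init__(self, z_dim=200):
        super().__init__()
        self.trunk = alexnet(weights=None).features                 # conv1 ... conv5 (+ ReLU / pooling)
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.LazyLinear(z_dim), nn.Tanh())  # z in [-1, 1]^200

    def forward(self, x):
        return self.head(self.trunk(x))

E = AlexNetEncoder()
z_hat = E(torch.randn(4, 3, 112, 112))   # higher-resolution 112 x 112 encoder input
print(z_hat.shape)                       # torch.Size([4, 200])
```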
Qualitative results The convolutional filters learned by each of the three modules are shown in Figure 3. We see that the filters learned by the encoder E have clear Gabor-like structure, similar to those originally reported for the fully supervised AlexNet model (Krizhevsky et al., 2012). The filters also have similar "grouping" structure where one half (the bottom half, in this case) is more color sensitive, and the other half is more edge sensitive. (This separation of the filters occurs due to the AlexNet architecture maintaining two separate filter paths for computational efficiency.)
In Figure 4 we present sample generations G(z), as well as real data samples x and their BiGAN reconstructions G(E(x)).
Table 2: Classification accuracy (%) on the ImageNet LSVRC (Russakovsky et al., 2015) validation set with various portions of the network frozen, or reinitialized and trained from scratch, following the evaluation from Noroozi & Favaro (2016). In, e.g., the conv3 column, the first three layers (conv1 through conv3) are transferred and frozen, and the remaining layers (conv4, conv5, and the fully connected layers) are reinitialized and trained fully supervised for ImageNet classification. BiGAN is competitive with these contemporary visual feature learning methods, despite its generality. (*Results from Noroozi & Favaro (2016) are not directly comparable to those of the other methods, as a different base convnet architecture with larger intermediate feature maps is used.)
The reconstructions, while certainly imperfect, demonstrate empirically that the BiGAN encoder E and generator G learn approximate inverse mappings, as shown theoretically in Theorem 2. In Appendix C.2, we present nearest neighbors in the BiGAN learned feature space.
ImageNet classification Following Noroozi & Favaro (2016), we evaluate by freezing the first N layers of our pretrained network and randomly reinitializing and training the remainder fully supervised for ImageNet classification. Results are reported in Table 2.
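A sketch of that protocol (hypothetical PyTorch code; the paper's own experiments are not run in this framework) is to copy the pretrained convolutional weights, freeze conv1 through convN, and train everything above them from a fresh initialization:

```python
import torch.nn as nn
from torchvision.models import alexnet

def freeze_first_n_convs(model, n_frozen):
    """Freeze conv1..conv_N (and the non-parametric layers between them) of an
    AlexNet-style model; later conv layers and the fully connected classifier
    remain trainable and would be reinitialized and trained fully supervised."""
    convs_seen = 0
    for layer in model.features:
        if isinstance(layer, nn.Conv2d):
            convs_seen += 1
        if convs_seen <= n_frozen:
            for p in layer.parameters():
                p.requires_grad_(False)
    return [p for p in model.parameters() if p.requires_grad]  # parameters to pass to the optimizer

model = alexnet(weights=None, num_classes=1000)
# (load the BiGAN-pretrained conv weights into model.features here)
trainable = freeze_first_n_convs(model, n_frozen=3)             # e.g. the conv3 column of Table 2
```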
VOC classification, detection, and segmentation We evaluate the transferability of BiGAN representations to the PASCAL VOC (Everingham et al., 2014) computer vision benchmark tasks, including classification, object detection, and semantic segmentation. The classification task involves simple binary prediction of presence or absence in a given image for each of 20 object categories. The object detection and semantic segmentation tasks go a step further by requiring the objects to be localized, with semantic segmentation requiring this at the finest scale: pixelwise prediction of object identity. For detection, the pretrained model is used as the initialization for Fast R-CNN (Girshick, 2015) (FRCN) training; and for semantic segmentation, the model is used as the initialization for Fully Convolutional Network (Long et al., 2015) (FCN) training, in each case replacing the AlexNet (Krizhevsky et al., 2012) model trained fully supervised for ImageNet classification. We report results on each of these tasks in Table 3, comparing BiGANs with contemporary approaches to unsupervised (Krähenbühl et al., 2016) and self-supervised feature learning, as well as the baselines described in Section 4.1.
# 4.4 DISCUSSION
Despite making no assumptions about the underlying structure of the data, the BiGAN unsupervised feature learning framework offers a representation competitive with existing self-supervised and even weakly supervised approaches to visual feature learning, while still being a purely generative model with the ability to sample data x and predict latent representation z. Furthermore, BiGANs outperform the discriminator (D) and latent regressor (LR) baselines discussed in Section 4.1, confirming our intuition that these approaches may not perform well in the regime of highly complex data distributions such as that of natural images. The version in which the encoder takes a higher resolution image than output by the generator (BiGAN 112 × 112 E) performs better still, and this strategy is not possible under the LR and D baselines, as each of those modules takes generator outputs as its input.
Although existing self-supervised approaches have shown impressive performance and thus far tended to outshine purely unsupervised approaches in the complex domain of high-resolution images, purely unsupervised approaches to feature learning or pre-training have several potential benefits.
                                            Classification (% mAP)     FRCN Detection (% mAP)   FCN Segmentation (% mIU)
            trained layers                  fc8     fc6-8    all       all                      all
sup.        ImageNet (Krizhevsky et al.)    77.0    78.8     78.3      56.8                     48.0
self-sup.   Agrawal et al. (2015)           31.2    31.0     54.2      43.9                     -
            Pathak et al. (2016)            30.5    34.6     56.5      44.5                     30.0
            Wang & Gupta (2015)             28.4    55.6     63.1      47.4                     -
            Doersch et al. (2015)           44.7    55.1     65.3      51.1                     -
unsup.      k-means (Krähenbühl et al.)     32.0    39.2     56.6      45.6                     32.6
            Discriminator (D)               30.7    40.5     56.4      -                        -
            Latent Regressor (LR)           36.9    47.9     57.1      -                        -
            Joint LR                        37.1    47.9     56.5      -                        -
            Autoencoder (ℓ2)                24.8    16.0     53.8      41.9                     -
            BiGAN (ours)                    37.5    48.7     58.9      46.2                     34.9
            BiGAN, 112 × 112 E (ours)       41.7    52.5     60.3      46.9                     35.2
Table 3: Classification and Fast R-CNN (Girshick, 2015) detection results for the PASCAL VOC 2007 (Everingham et al., 2014) test set, and FCN (Long et al., 2015) segmentation results on the PASCAL VOC 2012 validation set, under the standard mean average precision (mAP) or mean intersection over union (mIU) metrics for each task. Classification models are trained with various portions of the AlexNet (Krizhevsky et al., 2012) model frozen. In the fc8 column, only the linear classifier (a multinomial logistic regression) is learned; in the case of BiGAN, this is on top of randomly initialized fully connected (FC) layers fc6 and fc7. In the fc6-8 column, all three FC layers are trained fully supervised with all convolution layers frozen. Finally, in the all column, the entire network is "fine-tuned". BiGAN outperforms other unsupervised (unsup.) feature learning approaches, including the GAN-based baselines described in Section 4.1, and despite its generality, is competitive with contemporary self-supervised (self-sup.) feature learning approaches specific to the visual domain.
BiGAN and other unsupervised learning approaches are agnostic to the domain of the data. The self-supervised approaches are specific to the visual domain, in some cases requiring weak supervision from video unavailable in images alone. For example, the methods are not applicable in the permutation-invariant MNIST setting explored in Section 4.2, as the data are treated as flat vectors rather than 2D images.
Furthermore, BiGAN and other unsupervised approaches need not suffer from domain shift between the pre-training task and the transfer task, unlike self-supervised methods in which some aspect of the data is normally removed or corrupted in order to create a non-trivial prediction task. In the context prediction task (Doersch et al., 2015), the network sees only small image patches; the global image structure is unobserved. In the context encoder or inpainting task (Pathak et al., 2016), each image is corrupted by removing large areas to be filled in by the prediction network, creating inputs with dramatically different appearance from the uncorrupted natural images seen in the transfer tasks.
Other approaches (Agrawal et al., 2015; Wang & Gupta, 2015) rely on auxiliary information unavailable in the static image domain, such as video, egomotion, or tracking. Unlike BiGAN, such approaches cannot learn feature representations from unlabeled static images.
We finally note that the results presented here constitute only a preliminary exploration of the space of model architectures possible under the BiGAN framework, and we expect results to improve significantly with advancements in generative image models and discriminative convolutional networks alike.
# ACKNOWLEDGMENTS
The authors thank Evan Shelhamer, Jonathan Long, and other Berkeley Vision labmates for helpful discussions throughout this work. This work was supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Artificial Intelligence Research laboratory. The GPUs used for this work were donated by NVIDIA.
# REFERENCES
Pulkit Agrawal, Joao Carreira, and Jitendra Malik. Learning to see by moving. In ICCV, 2015.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv:1606.00704, 2016.
Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes challenge: A retrospective. IJCV, 2014.
Ross Girshick. Fast R-CNN. In ICCV, 2015.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML, 2013.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, 2013.
Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.
Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Philipp Krähenbühl, Carl Doersch, Jeff Donahue, and Trevor Darrell. Data-dependent initializations of convolutional neural networks. In ICLR, 2016.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 1998.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013.
Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
Ali Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In CVPR Workshops, 2014.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. IJCV, 2015.
Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688, 2016.
Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. Grammar as a foreign language. In NIPS, 2015.
Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
# APPENDIX A ADDITIONAL PROOFS
A.1 PROOF OF PROPOSITION 1 (OPTIMAL DISCRIMINATOR)
Proposition 1 For any E and G, the optimal discriminator D*_EG := argmax_D V(D, E, G) is the Radon-Nikodym derivative f_EG := dP_EX / d(P_EX + P_GZ) : Ω → [0, 1] of measure P_EX with respect to measure P_EX + P_GZ.
Proof. For measures P and Q on space Ω, with P absolutely continuous with respect to Q, the RN derivative f_PQ := dP/dQ exists, and we have

E_{x∼P}[g(x)] = ∫_Ω g dP = ∫_Ω g (dP/dQ) dQ = ∫_Ω g f_PQ dQ = E_{x∼Q}[f_PQ(x) g(x)].   (4)
Let the probability measure P_EG := (P_EX + P_GZ)/2 denote the average of measures P_EX and P_GZ. Both P_EX and P_GZ are absolutely continuous with respect to P_EG. Hence the RN derivatives f_EG := dP_EX / d(P_EX + P_GZ) and f_GE := dP_GZ / d(P_EX + P_GZ) exist, and they sum to one:
f_EG + f_GE = dP_EX / d(P_EX + P_GZ) + dP_GZ / d(P_EX + P_GZ) = d(P_EX + P_GZ) / d(P_EX + P_GZ) = 1.   (5)
We use (4) and (5) to rewrite the objective V (3) as a single expectation under measure P_EG:

V(D, E, G) = E_{(x,z)∼P_EX}[log D(x, z)] + E_{(x,z)∼P_GZ}[log(1 − D(x, z))]
           = E_{(x,z)∼P_EG}[2 f_EG(x, z) log D(x, z)] + E_{(x,z)∼P_EG}[2 f_GE(x, z) log(1 − D(x, z))]
           = 2 E_{(x,z)∼P_EG}[f_EG(x, z) log D(x, z) + f_GE(x, z) log(1 − D(x, z))]
           = 2 E_{(x,z)∼P_EG}[f_EG(x, z) log D(x, z) + (1 − f_EG(x, z)) log(1 − D(x, z))],

where the second equality applies (4) with the RN derivatives dP_EX/dP_EG = 2 f_EG and dP_GZ/dP_EG = 2 f_GE, and the last uses (5).
Note that argmax_y {a log y + (1 − a) log(1 − y)} = a for any a ∈ [0, 1]. Thus, D*_EG = f_EG.
A.2 PROOF OF PROPOSITION 2 (ENCODER AND GENERATOR OBJECTIVE)
Proposition 2 The encoder and generator's objective for an optimal discriminator, C(E, G) := max_D V(D, E, G) = V(D*_EG, E, G), can be rewritten in terms of the Jensen-Shannon divergence between measures P_EX and P_GZ as C(E, G) = 2 D_JS(P_EX || P_GZ) − log 4.
Proof. Using Proposition 1 along with (5) (so that 1 − D*_EG = 1 − f_EG = f_GE), we rewrite the objective:
C(E, G) = max_D V(D, E, G) = V(D*_EG, E, G)
        = E_{(x,z)∼P_EX}[log D*_EG(x, z)] + E_{(x,z)∼P_GZ}[log(1 − D*_EG(x, z))]
        = E_{(x,z)∼P_EX}[log f_EG(x, z)] + E_{(x,z)∼P_GZ}[log f_GE(x, z)]
        = E_{(x,z)∼P_EX}[log(2 f_EG(x, z))] + E_{(x,z)∼P_GZ}[log(2 f_GE(x, z))] − log 4
        = D_KL(P_EX || P_EG) + D_KL(P_GZ || P_EG) − log 4
        = D_KL(P_EX || (P_EX + P_GZ)/2) + D_KL(P_GZ || (P_EX + P_GZ)/2) − log 4
        = 2 D_JS(P_EX || P_GZ) − log 4.
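As a quick numerical sanity check of this identity (not part of the paper), one can compare the optimal-discriminator value of V with 2 D_JS − log 4 on a small pair of discrete distributions standing in for P_EX and P_GZ:

```python
import numpy as np

rng = np.random.default_rng(0)
p_ex = rng.random(6); p_ex /= p_ex.sum()   # toy stand-in for P_EX over 6 joint outcomes
p_gz = rng.random(6); p_gz /= p_gz.sum()   # toy stand-in for P_GZ

d_star = p_ex / (p_ex + p_gz)              # optimal discriminator D* = f_EG (Proposition 1)

# C(E, G) = E_{P_EX}[log D*] + E_{P_GZ}[log(1 - D*)]
c = np.sum(p_ex * np.log(d_star)) + np.sum(p_gz * np.log(1.0 - d_star))

m = 0.5 * (p_ex + p_gz)                    # mixture (P_EX + P_GZ) / 2
kl = lambda p, q: np.sum(p * np.log(p / q))
d_js = 0.5 * kl(p_ex, m) + 0.5 * kl(p_gz, m)

print(c, 2.0 * d_js - np.log(4.0))         # the two values agree, matching Proposition 2
```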
# A.3 MEASURE DEFINITIONS FOR DETERMINISTIC E AND G
While Theorem 1 and Propositions 1 and 2 hold for any encoder p_E(z|x) and generator p_G(x|z), stochastic or deterministic, Theorems 2 and 3 assume the encoder E and generator G are deterministic functions; i.e., with conditionals p_E(z|x) = δ(z − E(x)) and p_G(x|z) = δ(x − G(z)) defined as δ functions.
For use in the proofs of those theorems, we simplify the definitions of measures P_EX and P_GZ given in Section 3 for the case of deterministic functions E and G below:
P_EX(R) = ∫_{Ω_X} p_X(x) ∫_{Ω_Z} p_E(z|x) 1_{[(x,z) ∈ R]} dz dx
        = ∫_{Ω_X} p_X(x) ( ∫_{Ω_Z} δ(z − E(x)) 1_{[(x,z) ∈ R]} dz ) dx
        = ∫_{Ω_X} p_X(x) 1_{[(x, E(x)) ∈ R]} dx

P_GZ(R) = ∫_{Ω_Z} p_Z(z) ∫_{Ω_X} p_G(x|z) 1_{[(x,z) ∈ R]} dx dz
        = ∫_{Ω_Z} p_Z(z) ( ∫_{Ω_X} δ(x − G(z)) 1_{[(x,z) ∈ R]} dx ) dz
        = ∫_{Ω_Z} p_Z(z) 1_{[(G(z), z) ∈ R]} dz
A.4 PROOF OF THEOREM 2 (OPTIMAL GENERATOR AND ENCODER ARE INVERSES)
Theorem 2 If E and G are an optimal encoder and generator, then E = G⁻¹ almost everywhere; that is, G(E(x)) = x for PX-almost every x ∈ ΩX, and E(G(z)) = z for PZ-almost every z ∈ ΩZ. | 1605.09782#53 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 54 | Proof. Let R0X := {x ∈ ΩX : x ≠ G(E(x))} be the region of ΩX in which the inversion property x = G(E(x)) does not hold. We will show that, for optimal E and G, R0X has measure zero under PX (i.e., PX(R0X) = 0) and therefore x = G(E(x)) holds PX-almost everywhere.
Let R0 := {(x, z) ∈ Ω : z = E(x) ∧ x ∈ R0X} be the region of Ω such that (x, E(x)) ∈ R0 if and only if x ∈ R0X. We'll use the definitions of PEX and PGZ for deterministic E and G (Appendix A.3), and the fact that PEX = PGZ for optimal E and G (Theorem 1). | 1605.09782#54 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 55 | $$P_X(R^0_X) = \int_{\Omega_X} p_X(x)\, \mathbf{1}_{[x \in R^0_X]} \,\mathrm{d}x = \int_{\Omega_X} p_X(x)\, \mathbf{1}_{[(x,E(x)) \in R^0]} \,\mathrm{d}x = P_{EX}(R^0) = P_{GZ}(R^0) = \int_{\Omega_Z} p_Z(z)\, \mathbf{1}_{[(G(z),z) \in R^0]} \,\mathrm{d}z = \int_{\Omega_Z} p_Z(z)\, \mathbf{1}_{[z = E(G(z)) \,\wedge\, G(z) \in R^0_X]} \,\mathrm{d}z = \int_{\Omega_Z} p_Z(z)\, \underbrace{\mathbf{1}_{[z = E(G(z)) \,\wedge\, G(z) \neq G(E(G(z)))]}}_{=\,0 \text{ for any } z, \text{ as } z = E(G(z)) \implies G(z) = G(E(G(z)))} \,\mathrm{d}z = 0.$$
Hence region R0X has measure zero (PX(R0X) = 0), and the inversion property x = G(E(x)) holds PX-almost everywhere.
An analogous argument shows that R0Z := {z ∈ ΩZ : z ≠ E(G(z))} has measure zero under PZ (i.e., PZ(R0Z) = 0) and therefore z = E(G(z)) holds PZ-almost everywhere. | 1605.09782#55 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 56 | # A.5 PROOF OF THEOREM 3 (RELATIONSHIP TO AUTOENCODERS)
As shown in Proposition 2 (Section 3), the BiGAN objective is equivalent to the Jensen-Shannon divergence between PEX and PGZ. We now go a step further and show that this Jensen-Shannon divergence is closely related to a standard autoencoder loss. Omitting the 1/2 scale factor, a KL divergence term of the Jensen-Shannon divergence is given as
$$D_{KL}\!\left(P_{EX} \,\Big\|\, \tfrac{P_{EX}+P_{GZ}}{2}\right) = \int_{\Omega} \log \frac{\mathrm{d}P_{EX}}{\mathrm{d}\!\left(\tfrac{P_{EX}+P_{GZ}}{2}\right)} \,\mathrm{d}P_{EX} = \log 2 + \int_{\Omega} \log \frac{\mathrm{d}P_{EX}}{\mathrm{d}(P_{EX}+P_{GZ})} \,\mathrm{d}P_{EX}, \qquad (6)$$
where we abbreviate as f the Radon-Nikodym derivative fEG := dPEX / d(PEX + PGZ) ∈ [0, 1] defined in Proposition 1, for most of this proof.
We'll make use of the definitions of PEX and PGZ for deterministic E and G found in Appendix A.3. The integral term of the KL divergence expression given in (6) over a particular region R ⊆ Ω will be denoted by | 1605.09782#56 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 57 | $$F(R) := \int_R \log \frac{\mathrm{d}P_{EX}}{\mathrm{d}(P_{EX} + P_{GZ})} \,\mathrm{d}P_{EX} = \int_R \log f \,\mathrm{d}P_{EX}.$$
Next we will show that f > 0 holds PEX-almost everywhere, and hence F is always well deï¬ned and ï¬nite. We then show that F is equivalent to an autoencoder-like reconstruction loss function.
Proposition 3 f > 0 PEX-almost everywhere.
Proof. Let Rf=0 := {(x, z) ∈ Ω : f(x, z) = 0} be the region of Ω in which f = 0. Using the definition of the Radon-Nikodym derivative f, the measure $P_{EX}(R^{f=0}) = \int_{R^{f=0}} f \,\mathrm{d}(P_{EX} + P_{GZ}) = \int_{R^{f=0}} 0 \,\mathrm{d}(P_{EX} + P_{GZ}) = 0$ is zero. Hence f > 0 PEX-almost everywhere. Proposition 3 ensures that log f is defined PEX-almost everywhere, and F(R) is well-defined. Next we will show that F(R) mimics an autoencoder with ℓ0 loss, meaning F is zero for any region in which G(E(x)) ≠ x, and non-zero otherwise.
Proposition 4 The KL divergence F outside the support of PGZ is zero: F(Ω \ supp(PGZ)) = 0. | 1605.09782#57 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 58 | Proposition 4 The KL divergence F outside the support of PGZ is zero: F(Ω \ supp(PGZ)) = 0.
We'll first show that in region RS := Ω \ supp(PGZ), we have f = 1 PEX-almost everywhere. Let Rf<1 := {(x, z) ∈ RS : f(x, z) < 1} be the region of RS in which f < 1. Let's assume that Rf<1 has non-zero measure, PEX(Rf<1) > 0. Then, using the definition of the Radon-Nikodym derivative,
$$P_{EX}(R^{f<1}) = \int_{R^{f<1}} f \,\mathrm{d}(P_{EX} + P_{GZ}) = \int_{R^{f<1}} \underbrace{f}_{\leq\, \epsilon\, <\, 1} \,\mathrm{d}P_{EX} + \underbrace{\int_{R^{f<1}} f \,\mathrm{d}P_{GZ}}_{0} \leq \epsilon\, P_{EX}(R^{f<1}) < P_{EX}(R^{f<1}),$$ | 1605.09782#58 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 59 | $< P_{EX}(R^{f<1}),$
where ε is a constant smaller than 1. But PEX(Rf<1) < PEX(Rf<1) is a contradiction; hence PEX(Rf<1) = 0 and f = 1 PEX-almost everywhere in RS, implying log f = 0 PEX-almost everywhere in RS. Hence F(RS) = 0. By definition, F(Ω \ supp(PEX)) = 0 is also zero. The only region where F might be non-zero is R1 := supp(PEX) ∩ supp(PGZ).
Proposition 5 f < 1 PEX-almost everywhere in R1.
Let Rf=1 := {(x, z) ∈ R1 : f(x, z) = 1} be the region in which f = 1. Let's assume the set Rf=1 ≠ ∅ is not empty. By definition of the support, PEX(Rf=1) > 0 and PGZ(Rf=1) > 0. The Radon-Nikodym derivative on Rf=1 is then given by
$$P_{EX}(R^{f=1}) = \int_{R^{f=1}} f \,\mathrm{d}(P_{EX} + P_{GZ}) = \int_{R^{f=1}} 1 \,\mathrm{d}(P_{EX} + P_{GZ}) = P_{EX}(R^{f=1}) + P_{GZ}(R^{f=1}),$$ | 1605.09782#59 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 60 | $$P_{EX}(R^{f=1}) = \int_{R^{f=1}} f \,\mathrm{d}(P_{EX} + P_{GZ}) = \int_{R^{f=1}} 1 \,\mathrm{d}(P_{EX} + P_{GZ}) = P_{EX}(R^{f=1}) + P_{GZ}(R^{f=1}),$$
which implies PGZ(Rf=1) = 0 and contradicts the definition of support. Hence Rf=1 = ∅ and f < 1 PEX-almost everywhere on R1, implying log f < 0 PEX-almost everywhere.
Theorem 3 The encoder and generator objective given an optimal discriminator, C(E, G) := maxD V(D, E, G), can be rewritten as an ℓ0 autoencoder loss function
$$C(E, G) = \mathbb{E}_{x \sim p_X}\!\left[ \mathbf{1}_{[E(x) \in \hat{\Omega}_Z \,\wedge\, G(E(x)) = x]} \log f_{EG}(x, E(x)) \right] + \mathbb{E}_{z \sim p_Z}\!\left[ \mathbf{1}_{[G(z) \in \hat{\Omega}_X \,\wedge\, E(G(z)) = z]} \log\!\left(1 - f_{EG}(G(z), z)\right) \right]$$
with log fEG ∈ (−∞, 0) and log(1 − fEG) ∈ (−∞, 0) PEX-almost and PGZ-almost everywhere. | 1605.09782#60 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 61 | with log fEG ∈ (−∞, 0) and log(1 − fEG) ∈ (−∞, 0) PEX-almost and PGZ-almost everywhere.
Proof. Proposition 4 (F(Ω \ supp(PGZ)) = 0) and F(Ω \ supp(PEX)) = 0 imply that R1 := supp(PEX) ∩ supp(PGZ) is the only region of Ω where F may be non-zero; hence F(Ω) = F(R1).
(Footnote: we use the definition U ∩ C ≠ ∅ ⟹ P(U ∩ C) > 0 here.)
Note that
supp(PEX) = {(x, E(x)) : x ∈ Ω̂X} and supp(PGZ) = {(G(z), z) : z ∈ Ω̂Z}
⟹ R1 := supp(PEX) ∩ supp(PGZ) = {(x, z) : E(x) = z ∧ x ∈ Ω̂X ∧ G(z) = x ∧ z ∈ Ω̂Z} | 1605.09782#61 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 62 | So a point (x, E(x)) is in R1 if x ∈ Ω̂X, E(x) ∈ Ω̂Z, and G(E(x)) = x. (We can omit the x ∈ Ω̂X condition from inside an expectation over PX, as points x ∉ Ω̂X have 0 probability under PX.) Therefore,
$$D_{KL}\!\left(P_{EX} \,\Big\|\, \tfrac{P_{EX}+P_{GZ}}{2}\right) - \log 2 = F(\Omega) = F(R^1) = \int_{R^1} \log f(x, z) \,\mathrm{d}P_{EX} = \int_{\Omega} \mathbf{1}_{[(x,z) \in R^1]} \log f(x, z) \,\mathrm{d}P_{EX} = \mathbb{E}_{(x,z) \sim P_{EX}}\!\left[ \mathbf{1}_{[(x,z) \in R^1]} \log f(x, z) \right] = \mathbb{E}_{x \sim p_X}\!\left[ \mathbf{1}_{[(x,E(x)) \in R^1]} \log f(x, E(x)) \right] = \mathbb{E}_{x \sim p_X}\!\left[ \mathbf{1}_{[E(x) \in \hat{\Omega}_Z \,\wedge\, G(E(x)) = x]} \log f(x, E(x)) \right]$$
Finally, with Propositions 3 and 5, we have f ∈ (0, 1) PEX-almost everywhere in R1, and therefore log f ∈ (−∞, 0), taking a finite and strictly negative value PEX-almost everywhere. | 1605.09782#62 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 63 | An analogous argument (along with the fact that fEG + fGE = 1) lets us rewrite the other KL divergence term
$$D_{KL}\!\left(P_{GZ} \,\Big\|\, \tfrac{P_{EX}+P_{GZ}}{2}\right) - \log 2 = \mathbb{E}_{z \sim p_Z}\!\left[ \mathbf{1}_{[G(z) \in \hat{\Omega}_X \,\wedge\, E(G(z)) = z]} \log f_{GE}(G(z), z) \right] = \mathbb{E}_{z \sim p_Z}\!\left[ \mathbf{1}_{[G(z) \in \hat{\Omega}_X \,\wedge\, E(G(z)) = z]} \log\!\left(1 - f_{EG}(G(z), z)\right) \right]$$
The Jensen-Shannon divergence is the mean of these two KL divergences, giving C(E, G):
$$C(E, G) = 2\, D_{JS}\!\left(P_{EX} \,\|\, P_{GZ}\right) - \log 4 = D_{KL}\!\left(P_{EX} \,\Big\|\, \tfrac{P_{EX}+P_{GZ}}{2}\right) + D_{KL}\!\left(P_{GZ} \,\Big\|\, \tfrac{P_{EX}+P_{GZ}}{2}\right) - \log 4 = \mathbb{E}_{x \sim p_X}\!\left[ \mathbf{1}_{[E(x) \in \hat{\Omega}_Z \,\wedge\, G(E(x)) = x]} \log f_{EG}(x, E(x)) \right] + \mathbb{E}_{z \sim p_Z}\!\left[ \mathbf{1}_{[G(z) \in \hat{\Omega}_X \,\wedge\, E(G(z)) = z]} \log\!\left(1 - f_{EG}(G(z), z)\right) \right]$$
# APPENDIX B LEARNING DETAILS | 1605.09782#63 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 64 | # APPENDIX B LEARNING DETAILS
In this section we provide additional details on the BiGAN learning protocol summarized in Section 3.4. Goodfellow et al. (2014) found for GAN training that an objective in which the real and generated labels Y are swapped provides stronger gradient signal to G. We similarly observed in BiGAN training that an "inverse" objective Λ (with the same fixed point characteristics as V) provides stronger gradient signal to G and E, where
$$\Lambda(D, G, E) = \mathbb{E}_{x \sim p_X}\!\Big[ \underbrace{\mathbb{E}_{z \sim p_E(\cdot|x)}\!\left[ \log\!\left(1 - D(x, z)\right) \right]}_{\log\left(1 - D(x, E(x))\right)} \Big] + \mathbb{E}_{z \sim p_Z}\!\Big[ \underbrace{\mathbb{E}_{x \sim p_G(\cdot|z)}\!\left[ \log D(x, z) \right]}_{\log D(G(z), z)} \Big].$$ | 1605.09782#64 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
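To make the swapped-label training from the record above concrete, here is a minimal NumPy sketch of the BiGAN value V and the "inverse" objective Λ in the deterministic case, where the inner expectations collapse to log(1 − D(x, E(x))) and log D(G(z), z). The functions E, G, and D below are toy stand-ins (not the paper's networks), chosen only so the snippet runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the three modules; in the paper D, G and E are deep nets.
def E(x):                      # encoder: data (N, 4) -> latent (N, 2)
    return np.tanh(x[:, :2])

def G(z):                      # generator: latent (N, 2) -> data (N, 4)
    return np.concatenate([z, -z], axis=1)

def D(x, z):                   # discriminator on joint (x, z) pairs, in (0, 1)
    s = 1.0 / (1.0 + np.exp(-(x.sum(axis=1) + z.sum(axis=1))))
    return np.clip(s, 1e-6, 1.0 - 1e-6)   # keep away from exact 0/1 for the logs

x = rng.normal(size=(128, 4))              # batch of "real" data
z = rng.uniform(-1.0, 1.0, size=(128, 2))  # batch of latent samples

# BiGAN minimax value V(D, E, G): the discriminator ascends this quantity.
V = np.mean(np.log(D(x, E(x)))) + np.mean(np.log(1.0 - D(G(z), z)))

# "Inverse" objective Lambda with the real/generated roles swapped;
# E and G take steps in the *positive* gradient direction of Lambda.
Lam = np.mean(np.log(1.0 - D(x, E(x)))) + np.mean(np.log(D(G(z), z)))

print(f"V = {V:.3f}, Lambda = {Lam:.3f}")
```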
1605.09782 | 65 | In practice, θG and θE are updated by moving in the positive gradient direction of this inverse objective, ∇θE,θG Λ, rather than the negative gradient direction of the original objective. We also observed that learning behaved similarly when all parameters θD, θG, θE were updated simultaneously at each iteration rather than alternating between θD updates and θG, θE updates, so we took the simultaneous updating (non-alternating) approach for computational efficiency. (For standard GAN training, simultaneous updates of θD, θG performed similarly well, so our standard GAN experiments also follow this protocol.)
# APPENDIX C MODEL AND TRAINING DETAILS
In the following sections we present additional details on the models and training protocols used in the permutation-invariant MNIST and ImageNet evaluations presented in Section 4. | 1605.09782#65 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 66 | In the following sections we present additional details on the models and training protocols used in the permutation-invariant MNIST and ImageNet evaluations presented in Section 4.
Optimization For unsupervised training of BiGANs and baseline methods, we use the Adam optimizer to compute parameter updates, following the hyperparameters (initial step size α = 2 × 10⁻⁴, momentum β1 = 0.5 and β2 = 0.999) used by Radford et al. (2016). The step size α is decayed exponentially to α = 2 × 10⁻⁶ starting halfway through training. The mini-batch size is 128. ℓ2 weight decay of 2.5 × 10⁻⁵ is applied to all multiplicative weights in linear layers (but not to the learned bias β or scale γ parameters applied after batch normalization). Weights are initialized from a zero-mean normal distribution with a standard deviation of 0.02, with one notable exception: BiGAN discriminator weights that directly multiply z inputs to be added to spatial convolution outputs have initializations scaled by the convolution kernel size: e.g., for a 5 × 5 kernel, weights are initialized with a standard deviation of 0.5, 25 times the standard initialization. | 1605.09782#66 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
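For readers who want to reproduce the optimization setup described in the record above, here is a hedged sketch in PyTorch (the paper itself used Theano, so this is only an illustration). The dummy parameter and the schedule lengths are assumptions; only the Adam hyperparameters, the weight decay value, and the initialization scale come from the text.

```python
import torch

# A dummy parameter standing in for the D/G/E weights, initialized from
# a zero-mean normal with standard deviation 0.02 as described above.
params = [torch.nn.Parameter(torch.empty(1024, 1024).normal_(0.0, 0.02))]

# Adam with alpha = 2e-4, beta1 = 0.5, beta2 = 0.999.  Note: the paper applies
# l2 weight decay only to multiplicative weights in linear layers; passing
# weight_decay here applies it to every parameter, which is a simplification.
opt = torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999), weight_decay=2.5e-5)

# Exponential decay of the step size from 2e-4 towards 2e-6 over the second
# half of training; total_iters and decay_start are made-up schedule lengths.
total_iters, decay_start = 100_000, 50_000
gamma = (2e-6 / 2e-4) ** (1.0 / (total_iters - decay_start))
scheduler = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=gamma)
# Inside the training loop: opt.step(); then scheduler.step() once past decay_start.
```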
1605.09782 | 67 | Software & hardware We implement BiGANs and baseline feature learning methods using the Theano (Theano Development Team, 2016) framework, based on the convolutional GAN implementation provided by Radford et al. (2016). ImageNet transfer learning experiments (Section 4.3) use the Caffe (Jia et al., 2014) framework, per the Fast R-CNN (Girshick, 2015) and FCN (Long et al., 2015) reference implementations. Most computation is performed on an NVIDIA Titan X or Tesla K40 GPU.
C.1 PERMUTATION-INVARIANT MNIST
In all permutation-invariant MNIST experiments (Section 4.2), D, G, and E each consist of two hidden layers with 1024 units. The first hidden layer is followed by a non-linearity; the second is followed by (parameter-free) batch normalization (Ioffe & Szegedy, 2015) and a non-linearity. The second hidden layer in each case is the input to a linear prediction layer of the appropriate size. In D and E, a leaky ReLU (Maas et al., 2013) non-linearity with a "leak" of 0.2 is used; in G, a standard ReLU non-linearity is used. All models are trained for 400 epochs. | 1605.09782#67 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
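A minimal sketch of the permutation-invariant MNIST modules described in the record above, written in PyTorch purely for illustration (the paper used Theano). The 50-D latent size and the choice to feed the discriminator a concatenated (x, z) pair are assumptions; the 1024-unit hidden layers, parameter-free batch norm placement, and leaky-ReLU slope of 0.2 follow the text.

```python
import torch.nn as nn

def mlp(in_dim, out_dim, act):
    # Two 1024-unit hidden layers; the first is followed by a non-linearity,
    # the second by parameter-free (affine=False) batch norm and a
    # non-linearity, then a linear prediction layer.
    return nn.Sequential(
        nn.Linear(in_dim, 1024), act(),
        nn.Linear(1024, 1024), nn.BatchNorm1d(1024, affine=False), act(),
        nn.Linear(1024, out_dim),
    )

leaky = lambda: nn.LeakyReLU(0.2)
E = mlp(784, 50, leaky)        # encoder: flattened digit -> latent (size assumed 50)
G = mlp(50, 784, nn.ReLU)      # generator: latent -> flattened digit
D = mlp(784 + 50, 1, leaky)    # discriminator on a concatenated (x, z) pair (assumption)
```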
1605.09782 | 68 | C.2 IMAGENET
In all ImageNet experiments (Section 4.3), the encoder E architecture follows AlexNet (Krizhevsky et al., 2012) through the fifth and last convolution layer (conv5), with local response normalization (LRN) layers removed and batch normalization (Ioffe & Szegedy, 2015) (including the learned scaling and bias) with leaky ReLU non-linearity applied to the output of each convolution at unsupervised training time. (For supervised evaluation, batch normalization is not used, and the pre-trained scale and bias is merged into the preceding convolution's weights and bias.)
In most experiments, both the discriminator D and generator G architectures are those used by Radford et al. (2016), consisting of a series of four 5 × 5 convolutions (or "deconvolutions", i.e., fractionally-strided convolutions, for the generator G) applied with 2 pixel stride, each followed by batch normalization and a rectified non-linearity. | 1605.09782#68 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
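The generator description in the record above (four 5 × 5 fractionally-strided convolutions with stride 2, each followed by batch norm and a rectified non-linearity) can be sketched as follows in PyTorch; this is an illustration, not the authors' code. The channel widths, the 4 × 4 initial projection, the latent dimensionality, and the final tanh are assumptions borrowed from the usual DCGAN recipe rather than from the text.

```python
import torch.nn as nn

def up_block(cin, cout):
    # One fractionally-strided 5x5 "deconvolution" with 2-pixel stride,
    # followed by batch norm and a rectified non-linearity.
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, kernel_size=5, stride=2,
                           padding=2, output_padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

generator = nn.Sequential(
    nn.ConvTranspose2d(200, 1024, kernel_size=4),   # z as (N, 200, 1, 1) -> 4x4 map
    nn.BatchNorm2d(1024), nn.ReLU(inplace=True),
    up_block(1024, 512),   #  8 x  8
    up_block(512, 256),    # 16 x 16
    up_block(256, 128),    # 32 x 32
    nn.ConvTranspose2d(128, 3, kernel_size=5, stride=2,
                       padding=2, output_padding=1),
    nn.Tanh(),             # 64 x 64 x 3 output, matching the [-1, 1] input scale
)
```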
1605.09782 | 69 | The sole exception is our discriminator baseline feature learning experiment, in which we let the discriminator D be the AlexNet variant described above. Generally, using AlexNet (or a similar convnet architecture) as the discriminator D is detrimental to the visual fidelity of the resulting generated images, likely due to the relatively large convolutional filter kernel size applied to the input image, as well as the max-pooling layers, which explicitly discard information in the input. However, for fair comparison of the discriminator's feature learning abilities with those of BiGANs, we use the same architecture as used in the BiGAN encoder.
Preprocessing To produce a data sample x, we first sample an image from the database, and resize it proportionally such that its shorter edge has a length of 72 pixels. Then, a 64 × 64 crop is randomly selected from the resized image. The crop is flipped horizontally with probability 1/2. Finally, the crop is scaled to [−1, 1], giving the sample x.
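The preprocessing just described is simple enough to sketch directly; the following is a minimal illustration using PIL and NumPy (the resampling filter and rounding behaviour are assumptions not specified in the text).

```python
import numpy as np
from PIL import Image

def sample_crop(path, rng=np.random.default_rng()):
    """Resize the shorter edge to 72 px, take a random 64x64 crop,
    flip horizontally with probability 1/2, and scale to [-1, 1]."""
    im = Image.open(path).convert("RGB")
    w, h = im.size
    s = 72.0 / min(w, h)
    im = im.resize((int(round(w * s)), int(round(h * s))), Image.BILINEAR)
    w, h = im.size
    x0 = int(rng.integers(0, w - 64 + 1))
    y0 = int(rng.integers(0, h - 64 + 1))
    crop = im.crop((x0, y0, x0 + 64, y0 + 64))
    if rng.random() < 0.5:
        crop = crop.transpose(Image.FLIP_LEFT_RIGHT)
    return np.asarray(crop, dtype=np.float32) / 127.5 - 1.0
```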
[Figure 5 layout: each query image (left) is shown with its top #1-#4 nearest neighbors; the caption appears in the next record.] | 1605.09782#69 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
1605.09782 | 70 | Figure 5: For the query images used in Krähenbühl et al. (2016) (left), nearest neighbors (by minimum cosine distance) from the ImageNet LSVRC (Russakovsky et al., 2015) training set in the fc6 feature space of the ImageNet-trained BiGAN encoder E. (The fc6 weights are set randomly; this space is a random projection of the learned conv5 feature space.)
Timing A single epoch (one training pass over the 1.2 million images) of BiGAN training takes roughly 40 minutes on a Titan X GPU. Models are trained for 100 epochs, for a total training time of under 3 days.
Nearest neighbors In Figure 5 we present nearest neighbors in the feature space of the BiGAN encoder E learned in unsupervised ImageNet training. | 1605.09782#70 | Adversarial Feature Learning | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning. | http://arxiv.org/pdf/1605.09782 | Jeff Donahue, Philipp Krähenbühl, Trevor Darrell | cs.LG, cs.AI, cs.CV, cs.NE, stat.ML | Published as a conference paper at ICLR 2017. Changelog: (v7) Table 2
results improved 1-2% due to averaging predictions over 10 crops at test
time, as done in Noroozi & Favaro; Table 3 VOC classification results
slightly improved due to minor bugfix. (See v6 changelog for previous
versions.) | null | cs.LG | 20160531 | 20170403 | [
{
"id": "1605.02688"
},
{
"id": "1606.00704"
}
] |
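The nearest-neighbour retrieval used for Figure 5 reduces to a cosine-distance search over fc6 features; a minimal NumPy sketch follows (feature extraction itself is assumed to have happened elsewhere, and the random features are stand-ins).

```python
import numpy as np

def nearest_neighbors(query_feat, database_feats, k=4):
    # Minimum cosine distance == maximum cosine similarity on
    # L2-normalized feature vectors.
    q = query_feat / np.linalg.norm(query_feat)
    db = database_feats / np.linalg.norm(database_feats, axis=1, keepdims=True)
    return np.argsort(-(db @ q))[:k]

# Example with random stand-in features (4096-D, like AlexNet fc6):
feats = np.random.randn(1000, 4096).astype(np.float32)
print(nearest_neighbors(feats[0], feats, k=4))   # index 0 is its own nearest neighbor
```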
1605.09090 | 1 | # Abstract
In this paper, we proposed a sentence encoding-based model for recognizing text entailment. In our approach, the encoding of a sentence is a two-stage process. Firstly, average pooling was used over word-level bidirectional LSTM (biLSTM) outputs to generate a first-stage sentence representation. Secondly, an attention mechanism was employed to replace average pooling on the same sentence for better representations. Instead of using the target sentence to attend over words in the source sentence, we utilized the sentence's first-stage representation to attend over words appearing in the sentence itself, which is called "Inner-Attention" in our paper. Experiments conducted on the Stanford Natural Language Inference (SNLI) corpus proved the effectiveness of the "Inner-Attention" mechanism. With fewer parameters, our model outperformed the existing best sentence encoding-based approach by a large margin.
P: The boy is running through a grassy area.
H: The boy is in his room. (C)
H: A boy is running outside. (E)
H: The boy is in a park. (N)
Table 1: Examples of the three types of label in RTE, where P stands for Premise and H stands for Hypothesis.
also explored by many researchers, but have not been widely used because of their complexity and domain limitations. | 1605.09090#1 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 2 | also explored by many researchers, but have not been widely used because of their complexity and domain limitations.
The recently published Stanford Natural Language Inference (SNLI)1 corpus makes it possible to use deep learning methods to solve RTE problems. The deep learning approaches proposed so far can be roughly categorized into two groups: sentence encoding-based models and matching encoding-based models. As the name implies, the encoding of the sentence is the core of the former methods, while the latter methods directly model the relation between two sentences and do not generate sentence representations at all.
# 1 Introduction
Given a pair of sentences, the goal of recognizing text entailment (RTE) is to determine whether the hypothesis can reasonably be inferred from the premise. There are three types of relation in RTE: Entailment (inferred to be true), Contradiction (inferred to be false) and Neutral (truth unknown). A few examples are given in Table 1.
Traditional approaches to RTE have been dominated by classifiers employing hand-engineered features, which rely heavily on natural language processing pipelines and external resources. Formal reasoning methods (Bos and Markert, 2005) were | 1605.09090#2 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 3 | In view of universality, we focused our efforts on sentence encoding-based models. Existing methods of this kind include LSTM-based, GRU-based, TBCNN-based and SPINN-based models. Unidirectional LSTMs and GRUs suffer from the weakness of not utilizing contextual information from future tokens, and Convolutional Neural Networks do not make full use of the information contained in word order. A bidirectional LSTM utilizes both the previous and future context by processing the sequence in two directions, which helps to address the drawbacks mentioned above. | 1605.09090#3 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 4 | 1http://nlp.stanford.edu/projects/snli/
[Figure 1 panels, top to bottom: (A) Sentence Input (premise and hypothesis), (B) Sentence Encoding (mean pooling), (C) Sentence Matching (multiplication and other matching operations).]
Figure 1: Architecture of Bidirectional LSTM model with Inner-Attention
A recent work by Rocktäschel et al. (2015) improved the performance by applying a neural attention model that didn't yield sentence embeddings. | 1605.09090#4 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 5 | A recent work by Rocktäschel et al. (2015) improved the performance by applying a neural attention model that didn't yield sentence embeddings.
In this paper, we proposed a unified deep learning framework for recognizing textual entailment which does not require any feature engineering or external resources. The basic model is based on building biLSTM models on both the premise and the hypothesis. The basic mean pooling encoder can roughly form an intuition about what the sentence is talking about. Having obtained this representation, we extended the model by utilizing an Inner-Attention mechanism on both sides. This mechanism helps generate more accurate and focused sentence representations for classification. In addition, we introduced a simple but effective input strategy that gets rid of the words shared by the hypothesis and the premise, which further boosts our performance. Without parameter tuning, we improved the state-of-the-art performance of sentence encoding-based models by nearly 2%.
# 2 Our approach
In our work, we treated the RTE task as a supervised three-way classification problem. The overall architecture of our model is shown in Figure 1. The design of this model follows the idea of the Siamese Network: the two identical sentence encoders | 1605.09090#5 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 6 | share the same set of weights during training, and the two sentence representations are then combined to generate a "relation vector" for classification. As we can see from the figure, the model mainly consists of three parts. From top to bottom, these are: (A) the sentence input module; (B) the sentence encoding module; (C) the sentence matching module. We will explain the last two parts in detail in the following subsections; the sentence input module will be introduced in Section 3.3.
# 2.1 Sentence Encoding Module
The sentence encoding module is the fundamental part of this model. To generate better sentence representations, we employed a two-step strategy to encode sentences. Firstly, an average pooling layer was built on top of word-level biLSTMs to produce a sentence vector. This simple encoder, combined with the sentence matching module, forms the basic architecture of our model. With far fewer parameters, this basic model alone outperformed the existing state-of-the-art method by a small margin (refer to Table 3). Secondly, an attention mechanism was employed on the same sentence: instead of using the target sentence representation to attend over words in the source sentence, we used the representation generated in the previous stage to attend over words appearing in the sentence itself, which results in a similar distribution to other | 1605.09090#6 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 7 | attention mechanism weights. More attention was given to important words.2
The idea of "Inner-Attention" was inspired by the observation that when humans read a sentence, they can usually form a rough intuition about which parts of the sentence are more important, according to past experience. We implemented this idea using an attention mechanism in our model. The attention mechanism is formalized as follows:
$$M = \tanh(W^y Y + W^h R_{ave} \otimes e_L), \qquad \alpha = \mathrm{softmax}(w^T M), \qquad R_{att} = Y \alpha^T$$
where Y is a matrix consisting of the output vectors of the biLSTM, R_ave is the output of the mean pooling layer, e_L is a vector of ones of length L (so R_ave is repeated for every word position), α denotes the attention vector and R_att is the attention-weighted sentence representation.
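To make the shapes explicit, here is a small NumPy sketch of the Inner-Attention equations above; the dimensions and random weights are purely illustrative (in the model, W^y, W^h and w are learned and Y comes from the biLSTM).

```python
import numpy as np

def inner_attention(Y, W_y, W_h, w):
    """Y: (d, L) biLSTM output matrix. Returns R_att, the attention-weighted
    sentence representation of size (d,)."""
    d, L = Y.shape
    R_ave = Y.mean(axis=1, keepdims=True)                 # mean pooling, (d, 1)
    M = np.tanh(W_y @ Y + W_h @ R_ave @ np.ones((1, L)))  # R_ave repeated L times
    scores = w @ M                                        # (L,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                  # softmax over word positions
    return Y @ alpha                                      # R_att = Y alpha^T

rng = np.random.default_rng(0)
d, L = 6, 5
R_att = inner_attention(rng.normal(size=(d, L)), rng.normal(size=(d, d)),
                        rng.normal(size=(d, d)), rng.normal(size=(d,)))
```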
# 2.2 Sentence Matching Module
Once the sentence vectors are generated, three matching methods are applied to extract relations between the premise and the hypothesis:
• Concatenation of the two representations
• Element-wise product
• Element-wise difference
This matching architecture was first used by Mou et al. (2015). Finally, we used a SoftMax layer over the output of a non-linear projection of the generated matching vector for classification.
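A one-function sketch of the matching vector built from the two sentence representations (the non-linear projection and SoftMax classifier on top are omitted here):

```python
import numpy as np

def matching_features(r_premise, r_hypothesis):
    # Concatenation, element-wise product, and element-wise difference.
    return np.concatenate([r_premise, r_hypothesis,
                           r_premise * r_hypothesis,
                           r_premise - r_hypothesis])
```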
# 3 Experiments
# 3.1 DataSet | 1605.09090#7 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 8 | # 3 Experiments
# 3.1 DataSet
To evaluate the performance of our model, we conducted experiments on the Stanford Natural Language Inference (SNLI) corpus (Bos and Markert, 2005). At 570K pairs, SNLI is two orders of magnitude larger than all other resources of its type. The dataset is constructed through crowdsourced effort, with each sentence written by humans. The labels comprise three classes: Entailment, Contradiction, and Neutral
(Yang et al., 2016) proposed a Hierarchical Attention model for the task of document classification; a similar attention technique is also used there, but the target representation in their attention mechanism is randomly initialized.
(two irrelevant sentences). We applied the standard train/validation/test split, containing 550k, 10k, and 10k samples, respectively.
# 3.2 Parameter Setting | 1605.09090#8 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 9 | (two irrelevant sentences). We applied the standard train/validation/test split, containing 550k, 10k, and 10k samples, respectively.
# 3.2 Parameter Setting
The training objective of our model is cross-entropy loss, and we use minibatch SGD with RMSprop (Tieleman and Hinton, 2012) for optimization. The batch size is 128. A dropout layer was applied to the output of the network with the dropout rate set to 0.25. In our model, we used pretrained 300D GloVe 840B vectors (Pennington et al., 2014) to initialize the word embeddings. Out-of-vocabulary words in the training set are randomly initialized by sampling values uniformly from (-0.05, 0.05). None of these embeddings are updated during training. We did not tune word representations for two reasons: 1. to reduce the number of parameters to train; 2. to keep their representations close to those of unseen but similar words at inference time, which improves the model's generalization ability. The model is implemented using the open-source framework Keras.3
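For illustration, a minimal Keras sketch of this setup (frozen GloVe embeddings, a shared biLSTM + mean-pooling encoder, the heuristic matching vector, dropout 0.25, RMSprop with cross-entropy) is given below. This is not the authors' released code; vocab_size, max_len, the 600-dimensional projection, and the random placeholder embedding matrix are assumptions of the sketch:

```python
import numpy as np
from keras.models import Model
from keras.layers import (Input, Embedding, Bidirectional, LSTM,
                          GlobalAveragePooling1D, Dropout, Dense,
                          concatenate, multiply, subtract)
from keras.optimizers import RMSprop

max_len, vocab_size = 42, 50000                   # illustrative assumptions
glove = np.random.randn(vocab_size, 300)          # placeholder for the GloVe 840B matrix

emb = Embedding(vocab_size, 300, weights=[glove], trainable=False)  # embeddings stay frozen
bilstm = Bidirectional(LSTM(300, return_sequences=True))
pool = GlobalAveragePooling1D()
encode = lambda x: pool(bilstm(emb(x)))           # shared first-stage (mean pooling) encoder

premise = Input(shape=(max_len,), dtype='int32')
hypothesis = Input(shape=(max_len,), dtype='int32')
p, h = encode(premise), encode(hypothesis)
m = concatenate([p, h, multiply([p, h]), subtract([p, h])])   # heuristic matching vector
m = Dropout(0.25)(Dense(600, activation='tanh')(m))           # non-linear projection + dropout
out = Dense(3, activation='softmax')(m)

model = Model([premise, hypothesis], out)
model.compile(optimizer=RMSprop(), loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit([p_train, h_train], y_train, batch_size=128, ...)
```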
# 3.3 The Input Strategy
In this part, we investigated four strategies to modify the input on our basic model which helps us increase performance, the four strategies are: | 1605.09090#9 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 10 | # 3.3 The Input Strategy
In this part, we investigated four strategies for modifying the input to our basic model that help increase performance. The four strategies are:
• Inverting Premises (Sutskever et al., 2014)
• Doubling Premises (Zaremba and Sutskever, 2014)
• Doubling Hypothesis
• Differentiating Inputs (removing words that appear in both the premise and the hypothesis)
Experimental results are illustrated in Table 2. As we can see, doubling the hypothesis and differentiating inputs both improved our model's performance. Since hypotheses are usually much shorter than premises, doubling the hypothesis may absorb this difference and emphasize the meaning twice. The differentiating-inputs strategy forces the model to focus on the differing parts of the two sentences, which may help the classification of Neutral and Contradiction examples, as we observed that our model tended to assign unconfident instances to Entailment. The original input sentences shown in Figure 1 are:
Premise: Two man in polo shirts and tan pants im- mersed in a pleasant conversation about photograph. | 1605.09090#10 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 11 | 3http://keras.io/
Input Strategy            Test Acc.
Original Sequences        83.24%
Inverting Premises        82.60%
Doubling Premises         83.66%
Doubling Hypothesis       82.83%
Differentiating Inputs    83.72%

Table 2: Comparison of different input strategies
Hypothesis: Two man in polo shirts and tan pants involved in a heated discussion about Canon.

Label: Contradiction
While most of the words in this pair of sentences are the same or semantically close, it is hard for the model to distinguish the difference between them, which resulted in labeling the pair as Neutral or Entailment. The differentiating-inputs strategy solves this kind of problem; a minimal sketch of the strategy is given below.
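The snippet below is one plausible reading of the Differentiating Inputs strategy (our illustration, not the authors' code): words occurring in both sentences are dropped before encoding.

```python
def differentiate(premise_tokens, hypothesis_tokens):
    """Remove words shared by the premise and the hypothesis."""
    shared = set(premise_tokens) & set(hypothesis_tokens)
    return ([w for w in premise_tokens if w not in shared],
            [w for w in hypothesis_tokens if w not in shared])

p = "two man in polo shirts and tan pants immersed in a pleasant conversation about photograph".split()
h = "two man in polo shirts and tan pants involved in a heated discussion about canon".split()
print(differentiate(p, h))
# (['immersed', 'pleasant', 'conversation', 'photograph'],
#  ['involved', 'heated', 'discussion', 'canon'])
```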
# 3.4 Comparison Methods
In this part, we compared our model against the following state-of-the-art baseline approaches:
⢠LSTM enc: 100D LSTM encoders + MLP. (Bowman et al., 2015)
⢠GRU enc: 1024D GRU encoders + skip-thoughts + cat, -. (Vendrov et al., 2015)
⢠TBCNN enc: 300D Tree-based CNN encoders + cat, ⦠, -. (Mou et al., 2015)
⢠SPINN enc: 300D SPINN-NP encoders + cat, ⦠, -. (Bowman et al., 2016) | 1605.09090#11 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 12 | • SPINN enc: 300D SPINN-NP encoders + cat, ◦, -. (Bowman et al., 2016)
⢠Static-Attention: 100D LSTM + static attention. (Rockt¨aschel et al., 2015)
⢠WbW-Attention: 100D LSTM + word-by-word at- tention. (Rockt¨aschel et al., 2015)
The cat refers to concatenation, and - and ◦ denote element-wise difference and product, respectively. Our model is much simpler and easier to understand.
# 3.5 Results and Qualitative Analysis
Although the classification of an RTE example does not rely solely on representations obtained from attention, it is instructive to analyse the Inner-Attention mechanism, as we witnessed a large performance increase after employing it. We hand-picked several examples from the dataset to visualize. In order to make the weights more discriminative, we did not use a uniform colour atlas across sentences. That is, each sentence has its own colour atlas; the lightest colour and the darkest colour denote the smallest attention weight and the biggest value | 1605.09090#12 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 13 | Model                         Params   Test Acc.
Sentence encoding-based models
LSTM enc                      3.0M     80.6%
GRU enc                       15M      81.4%
TBCNN enc                     3.5M     82.1%
SPINN enc                     3.7M     83.2%
Basic model                   2.0M     83.3%
+ Inner-Attention             2.8M     84.2%
+ Diversing Input             2.8M     85.0%
Other neural network models
Static-Attention              242K     82.4%
WbW-Attention                 252K     83.5%
Table 3: Performance comparison of different models on SNLI.
within the sentence, respectively. Visualizations of Inner-Attention on these examples are depicted in Figure 2.
[Figure 2 shows attention heat maps over three example sentences: a firefighter climbing on a wooden scaffold, two women coming out of a subway station, and three men riding a moto.]
Figure 2: Inner-Attention Visualizations.
We observed that more attention was given to nouns, verbs and adjectives. This conforms to our intuition that these words are semantically richer than function words. While mean pooling regards each word as equally important, the attention mechanism helps re-weight words according to their importance, and more focused and accurate sentence representations were generated based on the produced attention vectors.
# 4 Conclusion and Future work | 1605.09090#13 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 14 | # 4 Conclusion and Future work
In this paper, we proposed a bidirectional LSTM-based model with Inner-Attention to solve the RTE problem. We came up with the idea of utilizing an attention mechanism within a sentence, which can teach itself to attend to words without information from the other sentence. The Inner-Attention mechanism helps produce more accurate sentence representations through attention vectors. In addition, the simple but effective diversing-input strategy introduced by us further boosts our results. This model can also be easily adapted to other sentence-matching models. Our future work includes:
1. Employ this architecture on other sentence-matching tasks such as Question Answering, Paraphrase, and Sentence Text Similarity.

2. Try more heuristic matching methods to make full use of the sentence vectors.
# Acknowledgments
We thank all anonymous reviewers for their hard work!
# References
[Bos and Markert2005] Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical in- ference. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natu- ral Language Processing, pages 628â635. Association for Computational Linguistics. | 1605.09090#14 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.09090 | 15 | [Bowman et al.2015] Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.
[Bowman et al.2016] Samuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. arXiv preprint arXiv:1603.06021.

[Mou et al.2015] Lili Mou, Men Rui, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2015. Recognizing entailment and contradiction by tree-based convolution. arXiv preprint arXiv:1512.08422.
[Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543.
[Rocktäschel et al.2015] Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. | 1605.09090#15 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.

[Tan et al.2015] Ming Tan, Bing Xiang, and Bowen Zhou. 2015. LSTM-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108.

[Tieleman and Hinton2012] Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop. COURSERA: Neural Networks for Machine Learning.

[Vendrov et al.2015] Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. arXiv preprint arXiv:1511.06361.
[Yang et al.2016] Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classiï¬- cation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies. | 1605.09090#16 | Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention | In this paper, we proposed a sentence encoding-based model for recognizing
text entailment. In our approach, the encoding of sentence is a two-stage
process. Firstly, average pooling was used over word-level bidirectional LSTM
(biLSTM) to generate a first-stage sentence representation. Secondly, attention
mechanism was employed to replace average pooling on the same sentence for
better representations. Instead of using target sentence to attend words in
source sentence, we utilized the sentence's first-stage representation to
attend words appeared in itself, which is called "Inner-Attention" in our paper
. Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus
has proved the effectiveness of "Inner-Attention" mechanism. With less number
of parameters, our model outperformed the existing best sentence encoding-based
approach by a large margin. | http://arxiv.org/pdf/1605.09090 | Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang | cs.CL | null | null | cs.CL | 20160530 | 20160530 | [
{
"id": "1512.08422"
},
{
"id": "1509.06664"
},
{
"id": "1511.04108"
},
{
"id": "1603.06021"
},
{
"id": "1511.06361"
},
{
"id": "1508.05326"
}
] |
1605.08803 | 0 |
Published as a conference paper at ICLR 2017
# DENSITY ESTIMATION USING REAL NVP
Laurent Dinh∗ Montreal Institute for Learning Algorithms, University of Montreal, Montreal, QC H3T1J4
Jascha Sohl-Dickstein Google Brain
Samy Bengio Google Brain
# ABSTRACT
Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.
# 1 Introduction | 1605.08803#0 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 1 | # 1 Introduction
The domain of representation learning has undergone tremendous advances due to improved supervised learning techniques. However, unsupervised learning has the potential to leverage large pools of unlabeled data, and extend these advances to modalities that are otherwise impractical or impossible.
One principled approach to unsupervised learning is generative probabilistic modeling. Not only do generative probabilistic models have the ability to create novel content, they also have a wide range of reconstruction related applications including inpainting [61, 46, 59], denoising [3], colorization [71], and super-resolution [9].
As data of interest are generally high-dimensional and highly structured, the challenge in this domain is building models that are powerful enough to capture its complexity yet still trainable. We address this challenge by introducing real-valued non-volume preserving (real NVP) transformations, a tractable yet expressive approach to modeling high-dimensional data.
This model can perform efficient and exact inference, sampling and log-density estimation of data points. Moreover, the architecture presented in this paper enables exact and efficient reconstruction of input images from the hierarchical features extracted by this model.
# 2 Related work | 1605.08803#1 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 2 | # 2 Related work
Substantial work on probabilistic generative models has focused on training models using maximum likelihood. One class of maximum likelihood models are those described by probabilistic undirected graphs, such as Restricted Boltzmann Machines [58] and Deep Boltzmann Machines [53]. These models are trained by taking advantage of the conditional independence property of their bipartite structure to allow efï¬cient exact or approximate posterior inference on latent variables. However, because of the intractability of the associated marginal distribution over latent variables, their training, evaluation, and sampling procedures necessitate the use of approximations like Mean Field inference and Markov Chain Monte Carlo, whose convergence time for such complex models
∗Work was done when author was at Google Brain.
[Figure 1 panel labels: Data space X and Latent space Z; Inference: x ∼ p̂_X, z = f(x); Generation: z ∼ p_Z, x = f^{-1}(z).] | 1605.08803#2 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 3 | Inference x â¼ ËpX z = f (x) â Generation z â¼ pZ x = f â1 (z) â
Figure 1: Real NVP learns an invertible, stable mapping between a data distribution p̂_X and a latent distribution p_Z (typically a Gaussian). Here we show a mapping that has been learned on a toy 2-d dataset. The function f(x) maps samples x from the data distribution in the upper left into approximate samples z from the latent distribution, in the upper right. This corresponds to exact inference of the latent state given the data. The inverse function, f^{-1}(z), maps samples z from the latent distribution in the lower right into approximate samples x from the data distribution in the lower left. This corresponds to exact generation of samples from the model. The transformation of grid lines in X and Z space is additionally illustrated for both f(x) and f^{-1}(z).
remains undetermined, often resulting in generation of highly correlated samples. Furthermore, these approximations can often hinder their performance [7]. | 1605.08803#3 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 4 | remains undetermined, often resulting in generation of highly correlated samples. Furthermore, these approximations can often hinder their performance [7].
Directed graphical models are instead deï¬ned in terms of an ancestral sampling procedure, which is appealing both for its conceptual and computational simplicity. They lack, however, the conditional independence structure of undirected models, making exact and approximate posterior inference on latent variables cumbersome [56]. Recent advances in stochastic variational inference [27] and amortized inference [13, 43, 35, 49], allowed efï¬cient approximate inference and learning of deep directed graphical models by maximizing a variational lower bound on the log-likelihood [45]. In particular, the variational autoencoder algorithm [35, 49] simultaneously learns a generative network, that maps gaussian latent variables z to samples x, and a matched approximate inference network that maps samples x to a semantically meaningful latent representation z, by exploiting the reparametrization trick [68]. Its success in leveraging recent advances in backpropagation [51, 39] in deep neural networks resulted in its adoption for several applications ranging from speech synthesis [12] to language modeling [8]. Still, the approximation in the inference process limits its ability to learn high dimensional deep representations, motivating recent work in improving approximate inference [42, 48, 55, 63, 10, 59, 34]. | 1605.08803#4 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 5 | Such approximations can be avoided altogether by abstaining from using latent variables. Auto- regressive models [18, 6, 37, 20] can implement this strategy while typically retaining a great deal of ï¬exibility. This class of algorithms tractably models the joint distribution by decomposing it into a product of conditionals using the probability chain rule according to a ï¬xed ordering over dimensions, simplifying log-likelihood evaluation and sampling. Recent work in this line of research has taken advantage of recent advances in recurrent networks [51], in particular long-short term memory [26], and residual networks [25, 24] in order to learn state-of-the-art generative image models [61, 46] and language models [32]. The ordering of the dimensions, although often arbitrary, can be critical to the training of the model [66]. The sequential nature of this model limits its computational efï¬ciency. For example, its sampling procedure is sequential and non-parallelizable, which can become cumbersome in applications like speech and music synthesis, or real-time rendering.. Additionally, there is no natural latent representation associated with autoregressive models, and they have not yet been shown to be useful for semi-supervised learning.
Published as a conference paper at ICLR 2017 | 1605.08803#5 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 6 |
Generative Adversarial Networks (GANs) [21] on the other hand can train any differentiable gen- erative network by avoiding the maximum likelihood principle altogether. Instead, the generative network is associated with a discriminator network whose task is to distinguish between samples and real data. Rather than using an intractable log-likelihood, this discriminator network provides the training signal in an adversarial fashion. Successfully trained GAN models [21, 15, 47] can consistently generate sharp and realistically looking samples [38]. However, metrics that measure the diversity in the generated samples are currently intractable [62, 22, 30]. Additionally, instability in their training process [47] requires careful hyperparameter tuning to avoid diverging behavior.
Training such a generative network g that maps latent variable z ∼ p_Z to a sample x ∼ p_X does not in theory require a discriminator network as in GANs, or approximate inference as in variational autoencoders. Indeed, if g is bijective, it can be trained through maximum likelihood using the change of variable formula:
p_X(x) = p_Z(z) \left| \det\!\left( \frac{\partial g(z)}{\partial z^T} \right) \right|^{-1} \quad (1) | 1605.08803#6 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 7 | p_X(x) = p_Z(z) \left| \det\!\left( \frac{\partial g(z)}{\partial z^T} \right) \right|^{-1} \quad (1)
This formula has been discussed in several papers including the maximum likelihood formulation of independent components analysis (ICA) [4, 28], gaussianization [14, 11] and deep density models [5, 50, 17, 3]. As the existence proof of nonlinear ICA solutions [29] suggests, auto-regressive models can be seen as tractable instance of maximum likelihood nonlinear ICA, where the residual corresponds to the independent components. However, naive application of the change of variable formula produces models which are computationally expensive and poorly conditioned, and so large scale models of this type have not entered general use.
# 3 Model deï¬nition | 1605.08803#7 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 8 | # 3 Model deï¬nition
In this paper, we will tackle the problem of learning highly nonlinear models in high-dimensional continuous spaces through maximum likelihood. In order to optimize the log-likelihood, we introduce a more flexible class of architectures that enables the computation of log-likelihood on continuous data using the change of variable formula. Building on our previous work in [17], we define a powerful class of bijective functions which enable exact and tractable density evaluation and exact and tractable inference. Moreover, the resulting cost function does not rely on a fixed form reconstruction cost such as square error [38, 47], and generates sharper samples as a result. Also, this flexibility helps us leverage recent advances in batch normalization [31] and residual networks [24, 25] to define a very deep multi-scale architecture with multiple levels of abstraction.
# 3.1 Change of variable formula
Given an observed data variable x ∈ X, a simple prior probability distribution p_Z on a latent variable z ∈ Z, and a bijection f : X → Z (with g = f^{-1}), the change of variable formula defines a model distribution on X by
p_X(x) = p_Z(f(x)) \left| \det\!\left( \frac{\partial f(x)}{\partial x^T} \right) \right| \quad (2) | 1605.08803#8 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 9 | p_X(x) = p_Z(f(x)) \left| \det\!\left( \frac{\partial f(x)}{\partial x^T} \right) \right| \quad (2)
\log\big(p_X(x)\big) = \log\Big(p_Z\big(f(x)\big)\Big) + \log\left( \left| \det\!\left( \frac{\partial f(x)}{\partial x^T} \right) \right| \right) \quad (3)
where \frac{\partial f(x)}{\partial x^T} is the Jacobian of f at x.
Exact samples from the resulting distribution can be generated by using the inverse transform sampling rule [16]. A sample z ∼ p_Z is drawn in the latent space, and its inverse image x = f^{-1}(z) = g(z) generates a sample in the original space. Computing the density on a point x is accomplished by computing the density of its image f(x) and multiplying by the associated Jacobian determinant det(∂f(x)/∂x^T). See also Figure 1. Exact and efficient inference enables the accurate and fast evaluation of the model.
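As a toy illustration of this inference/sampling duality (ours, not the paper's code), the NumPy snippet below applies Equations (2)-(3) and inverse-transform sampling to a trivially invertible affine map with a standard Gaussian prior; real NVP replaces this toy map with deep coupling layers, but the bookkeeping stays the same:

```python
import numpy as np

s, b = np.array([2.0, 0.5]), np.array([1.0, -3.0])   # assumed toy parameters

f = lambda x: (x - b) / s                  # inference: x -> z
f_inv = lambda z: z * s + b                # generation: z -> x
log_det_jac = -np.sum(np.log(np.abs(s)))   # log |det df/dx| (constant for this toy map)

def log_prob_z(z):                         # standard Gaussian prior p_Z
    return -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2 * np.pi)

def log_prob_x(x):                         # Equation (3)
    return log_prob_z(f(x)) + log_det_jac

x_sample = f_inv(np.random.randn(2))       # exact sampling: z ~ p_Z, then x = g(z)
print(log_prob_x(x_sample))
```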
(a) Forward propagation (b) Inverse propagation | 1605.08803#9 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 10 |
(a) Forward propagation (b) Inverse propagation
Figure 2: Computational graphs for forward and inverse propagation. A coupling layer applies a simple invertible transformation consisting of scaling followed by addition of a constant offset to one part x2 of the input vector conditioned on the remaining part of the input vector x1. Because of its simple nature, this transformation is both easily invertible and possesses a tractable determinant. However, the conditional nature of this transformation, captured by the functions s and t, signiï¬cantly increase the ï¬exibility of this otherwise weak function. The forward and inverse propagation operations have identical computational cost.
# 3.2 Coupling layers
Computing the Jacobian of functions with high-dimensional domain and codomain and computing the determinants of large matrices are in general computationally very expensive. This combined with the restriction to bijective functions makes Equation 2 appear impractical for modeling arbitrary distributions.
As shown however in [17], by careful design of the function f , a bijective model can be learned which is both tractable and extremely ï¬exible. As computing the Jacobian determinant of the transformation is crucial to effectively train using this principle, this work exploits the simple observation that the determinant of a triangular matrix can be efï¬ciently computed as the product of its diagonal terms. | 1605.08803#10 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 11 | We will build a flexible and tractable bijective function by stacking a sequence of simple bijections. In each simple bijection, part of the input vector is updated using a function which is simple to invert, but which depends on the remainder of the input vector in a complex way. We refer to each of these simple bijections as an affine coupling layer. Given a D dimensional input x and d < D, the output y of an affine coupling layer follows the equations
y_{1:d} = x_{1:d} \quad (4)
y_{d+1:D} = x_{d+1:D} \odot \exp\big(s(x_{1:d})\big) + t(x_{1:d}) \quad (5)
where s and t stand for scale and translation, and are functions from R^d → R^{D-d}, and ⊙ is the Hadamard product or element-wise product (see Figure 2).
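A minimal NumPy sketch of one affine coupling layer follows (our illustration; in the paper s and t are deep convolutional networks, whereas here they are tiny placeholder functions, and the dimensions are arbitrary assumptions):

```python
import numpy as np

d, D = 2, 4                                    # assumed split point and input size
W_s, W_t = np.random.randn(D - d, d), np.random.randn(D - d, d)
s_net = lambda x1: np.tanh(W_s @ x1)           # stand-in for the scale network s(.)
t_net = lambda x1: W_t @ x1                    # stand-in for the translation network t(.)

def coupling_forward(x):
    x1, x2 = x[:d], x[d:]
    y1 = x1                                    # Eq. (4): first part passes through
    y2 = x2 * np.exp(s_net(x1)) + t_net(x1)    # Eq. (5)
    log_det = np.sum(s_net(x1))                # log-det of the triangular Jacobian
    return np.concatenate([y1, y2]), log_det

y, log_det = coupling_forward(np.random.randn(D))
```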
# 3.3 Properties
The Jacobian of this transformation is
\frac{\partial y}{\partial x^T} = \begin{bmatrix} \mathbb{I}_d & 0 \\ \frac{\partial y_{d+1:D}}{\partial x_{1:d}^T} & \operatorname{diag}\big(\exp[s(x_{1:d})]\big) \end{bmatrix} \quad (6) | 1605.08803#11 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 12 | # 3.3 Properties
The Jacobian of this transformation is
\frac{\partial y}{\partial x^T} = \begin{bmatrix} \mathbb{I}_d & 0 \\ \frac{\partial y_{d+1:D}}{\partial x_{1:d}^T} & \operatorname{diag}\big(\exp[s(x_{1:d})]\big) \end{bmatrix} \quad (6)
where diag(exp[s(x_{1:d})]) is the diagonal matrix whose diagonal elements correspond to the vector exp[s(x_{1:d})]. Given the observation that this Jacobian is triangular, we can efficiently compute its determinant as exp[\sum_j s(x_{1:d})_j]. Since computing the Jacobian determinant of the coupling layer operation does not involve computing the Jacobian of s or t, those functions can be arbitrarily complex. We will make them deep convolutional neural networks. Note that the hidden layers of s and t can have more features than their input and output layers.
Another interesting property of these coupling layers in the context of deï¬ning probabilistic models is their invertibility. Indeed, computing the inverse is no more complex than the forward propagation
Published as a conference paper at ICLR 2017 | 1605.08803#12 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 13 | Figure 3: Masking schemes for afï¬ne coupling layers. On the left, a spatial checkerboard pattern mask. On the right, a channel-wise masking. The squeezing operation reduces the 4 à 4 à 1 tensor (on the left) into a 2 à 2 à 4 tensor (on the right). Before the squeezing operation, a checkerboard pattern is used for coupling layers while a channel-wise masking pattern is used afterward.
(see Figure 2(b)),
\begin{cases} y_{1:d} = x_{1:d} \\ y_{d+1:D} = x_{d+1:D} \odot \exp\big(s(x_{1:d})\big) + t(x_{1:d}) \end{cases} \quad (7)
\begin{cases} x_{1:d} = y_{1:d} \\ x_{d+1:D} = \big(y_{d+1:D} - t(y_{1:d})\big) \odot \exp\big(-s(y_{1:d})\big) \end{cases} \quad (8)
meaning that sampling is as efficient as inference for this model. Note again that computing the inverse of the coupling layer does not require computing the inverse of s or t, so these functions can be arbitrarily complex and difficult to invert.
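The following small NumPy sketch (ours, with placeholder s and t networks) checks this property numerically: the inverse pass of Equation (8) recovers the input of the forward pass without ever inverting s or t.

```python
import numpy as np

d = 2
s_net = lambda x1: np.tanh(x1)      # placeholder scale network
t_net = lambda x1: 0.5 * x1         # placeholder translation network

def coupling_forward(x):
    x1, x2 = x[:d], x[d:]
    return np.concatenate([x1, x2 * np.exp(s_net(x1)) + t_net(x1)])

def coupling_inverse(y):
    y1, y2 = y[:d], y[d:]
    return np.concatenate([y1, (y2 - t_net(y1)) * np.exp(-s_net(y1))])

x = np.random.randn(4)
assert np.allclose(coupling_inverse(coupling_forward(x)), x)   # exact invertibility
```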
# 3.4 Masked convolution
Partitioning can be implemented using a binary mask b, and using the functional form for y,
y = b \odot x + (1 - b) \odot \big(x \odot \exp\big(s(b \odot x)\big) + t(b \odot x)\big). \quad (9) | 1605.08803#13 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
We use two partitionings that exploit the local correlation structure of images: spatial checkerboard patterns, and channel-wise masking (see Figure 3). The spatial checkerboard pattern mask has value 1 where the sum of spatial coordinates is odd, and 0 otherwise. The channel-wise mask b is 1 for the first half of the channel dimensions and 0 for the second half. For the models presented here, both s(·) and t(·) are rectified convolutional networks.
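The two mask patterns are easy to construct explicitly. The sketch below (NumPy, with constant arrays standing in for the outputs of s and t, which are real convolutional networks in the paper) builds both masks and evaluates the masked coupling of Equation (9) on a single-channel image:

```python
import numpy as np

def checkerboard_mask(h, w):
    """1 where the sum of spatial coordinates is odd, 0 otherwise."""
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return ((rows + cols) % 2 == 1).astype(np.float32)

def channel_mask(c):
    """1 for the first half of the channel dimensions, 0 for the second half."""
    m = np.zeros(c, dtype=np.float32)
    m[: c // 2] = 1.0
    return m

b = checkerboard_mask(4, 4)               # spatial pattern, shape (4, 4)
print(channel_mask(8))                    # [1 1 1 1 0 0 0 0]

# Masked affine coupling, Equation (9): y = b*x + (1-b)*(x*exp(s(b*x)) + t(b*x))
x = np.random.randn(4, 4)
s_out = 0.1 * np.ones_like(x)             # placeholder for s(b*x)
t_out = np.zeros_like(x)                  # placeholder for t(b*x)
y = b * x + (1 - b) * (x * np.exp(s_out) + t_out)
print(y.shape)                            # (4, 4)
```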
# 3.5 Combining coupling layers
Although coupling layers can be powerful, their forward transformation leaves some components unchanged. This difficulty can be overcome by composing coupling layers in an alternating pattern, such that the components that are left unchanged in one coupling layer are updated in the next (see Figure 4(a)).
The Jacobian determinant of the resulting function remains tractable, relying on the fact that

∂(f_b ∘ f_a)/∂x_a^T (x_a) = ∂f_a/∂x_a^T (x_a) · ∂f_b/∂x_b^T (x_b = f_a(x_a)) (10)

det(A · B) = det(A) det(B). (11)
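In code, this means the log-determinants of stacked coupling layers simply add up. The sketch below is a simplified, scale-only toy version (no translation term, a tanh in place of a learned network) meant only to show the accumulation pattern under alternating masks:

```python
import numpy as np

class ToyCoupling:
    """Scale-only masked coupling on a flattened input; s depends only on b*x."""
    def __init__(self, mask, scale):
        self.b, self.scale = mask, scale
    def forward(self, x):
        s = self.scale * np.tanh(x * self.b)            # placeholder for s(b*x)
        y = self.b * x + (1 - self.b) * (x * np.exp(s))
        log_det = ((1 - self.b) * s).sum(axis=1)        # only unmasked dims contribute
        return y, log_det

D = 6
m = (np.arange(D) % 2).astype(float)                    # alternating masks
layers = [ToyCoupling(m, 0.1), ToyCoupling(1 - m, 0.1), ToyCoupling(m, 0.1)]

def flow_forward(layers, x):
    # determinants multiply under composition, so log-determinants add
    log_det = np.zeros(x.shape[0])
    for layer in layers:
        x, ld = layer.forward(x)
        log_det += ld
    return x, log_det

x = np.random.randn(4, D)
z, log_det = flow_forward(layers, x)
print(z.shape, log_det)
```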
(a) In this alternating pattern, units which remain identical in one transformation are modified in the next.
(b) Factoring out variables. At each step, half the variables are directly modeled as Gaussians, while the other half undergo further transformation.
Figure 4: Composition schemes for affine coupling layers.
# 3.6 Multi-scale architecture
We implement a multi-scale architecture using a squeezing operation: for each channel, it divides the image into subsquares of shape 2 × 2 × c, then reshapes them into subsquares of shape 1 × 1 × 4c. The squeezing operation transforms an s × s × c tensor into an s/2 × s/2 × 4c tensor (see Figure 3), effectively trading spatial size for number of channels.
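A hedged sketch of this reshaping, assuming an NHWC tensor layout (the exact ordering of the four new channels is an implementation choice, not prescribed here), together with its inverse:

```python
import numpy as np

def squeeze(x):
    """Trade spatial size for channels: (N, H, W, C) -> (N, H//2, W//2, 4C),
    grouping each 2x2xC subsquare into a 1x1x4C vector."""
    n, h, w, c = x.shape
    x = x.reshape(n, h // 2, 2, w // 2, 2, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h // 2, w // 2, 4 * c)

def unsqueeze(x):
    """Inverse of squeeze: (N, H, W, 4C) -> (N, 2H, 2W, C)."""
    n, h, w, c4 = x.shape
    c = c4 // 4
    x = x.reshape(n, h, w, 2, 2, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, 2 * h, 2 * w, c)

x = np.random.randn(1, 4, 4, 1)
assert squeeze(x).shape == (1, 2, 2, 4)
assert np.allclose(unsqueeze(squeeze(x)), x)   # the operation is exactly invertible
```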
At each scale, we combine several operations into a sequence: we first apply three coupling layers with alternating checkerboard masks, then perform a squeezing operation, and finally apply three more coupling layers with alternating channel-wise masking. The channel-wise masking is chosen so that the resulting partitioning is not redundant with the previous checkerboard masking (see Figure 3). For the final scale, we only apply four coupling layers with alternating checkerboard masks.
Propagating a D dimensional vector through all the coupling layers would be cumbersome, in terms of computational and memory cost, and in terms of the number of parameters that would need to be trained. For this reason we follow the design choice of [57] and factor out half of the dimensions at regular intervals (see Equation 14). We can define this operation recursively (see Figure 4(b)),
h^(0) = x (13)

(z^(i+1), h^(i+1)) = f^(i+1)(h^(i)) (14)

z^(L) = f^(L)(h^(L−1)) (15)

z = (z^(1), . . . , z^(L)). (16)
In our experiments, we use this operation for i < L. The sequence of coupling-squeezing-coupling operations described above is performed per layer when computing f^(i) (Equation 14). At each layer, as the spatial resolution is reduced, the number of hidden layer features in s and t is doubled. All variables which have been factored out at different scales are concatenated to obtain the final transformed output (Equation 16).
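The recursion of Equations (13)–(16) can be sketched as follows; the identity lambdas stand in for the per-scale transformations f^(i), which in the paper are the coupling-squeezing-coupling sequences:

```python
import numpy as np

def multiscale_forward(x, scale_fns, L):
    """At each scale i < L, transform h with f^(i+1) and factor out half the
    dimensions as z^(i+1); the final scale produces z^(L) directly."""
    h, zs = x, []
    for i in range(L - 1):
        h = scale_fns[i](h)                    # stand-in for f^(i+1)
        z_i, h = np.split(h, 2, axis=1)        # factor out half the variables
        zs.append(z_i)
    zs.append(scale_fns[L - 1](h))             # z^(L) = f^(L)(h^(L-1))
    return np.concatenate(zs, axis=1)          # z = (z^(1), ..., z^(L))

L = 3
fns = [lambda h: h] * L                        # identity placeholders for each f^(i)
x = np.random.randn(2, 16)
z = multiscale_forward(x, fns, L)
print(z.shape)                                 # (2, 16): dimensionality is preserved
```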
As a consequence, the model must Gaussianize units which are factored out at a finer scale (in an earlier layer) before those which are factored out at a coarser scale (in a later layer). This results in the definition of intermediary levels of representation [53, 49] corresponding to more local, fine-grained features as shown in Appendix D.
Moreover, Gaussianizing and factoring out units in earlier layers has the practical benefit of distributing the loss function throughout the network, following a philosophy similar to guiding intermediate layers using intermediate classifiers [40]. It also reduces significantly the amount of computation and memory used by the model, allowing us to train larger models.
# 3.7 Batch normalization
To further improve the propagation of training signal, we use deep residual networks [24, 25] with batch normalization [31] and weight normalization [2, 54] in s and t. As described in Appendix E we introduce and use a novel variant of batch normalization which is based on a running average over recent minibatches, and is thus more robust when training with very small minibatches.
We also apply batch normalization to the whole coupling layer output. The effects of batch normalization are easily included in the Jacobian computation, since it acts as a linear rescaling on each dimension. That is, given the estimated batch statistics μ̃ and σ̃², the rescaling function
x ↦ (x − μ̃) / √(σ̃² + ε) (17)
has a Jacobian determinant
( ∏_i (σ̃_i² + ε) )^(−1/2) . (18)
This form of batch normalization can be seen as similar to reward normalization in deep reinforcement learning [44, 65].
We found that the use of this technique not only allowed training with a deeper stack of coupling layers, but also alleviated the instability problem that practitioners often encounter when training conditional distributions with a scale parameter through a gradient-based approach.
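As a small illustration of Equations (17)–(18), the sketch below computes the rescaling and the corresponding log-Jacobian-determinant contribution; the ε value is illustrative, and in the paper the statistics come from a running average over recent minibatches rather than the current batch:

```python
import numpy as np

def batchnorm_rescale(x, mu, var, eps=1e-4):
    """Per-dimension rescaling of Equation (17); its log|det| contribution,
    Equation (18), is -0.5 * sum_i log(var_i + eps)."""
    y = (x - mu) / np.sqrt(var + eps)
    log_det = -0.5 * np.log(var + eps).sum()    # identical for every example
    return y, log_det

x = np.random.randn(64, 10)
mu, var = x.mean(axis=0), x.var(axis=0)         # stand-ins for running averages
y, log_det = batchnorm_rescale(x, mu, var)
print(y.shape, log_det)
```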
# 4 Experiments
# 4.1 Procedure
The algorithm described in Equation (2) shows how to learn distributions on unbounded space. In general, the data of interest have bounded magnitude. For example, the pixel values of an image typically lie in [0, 256] after application of the recommended jittering procedure [64, 62]. In order to reduce the impact of boundary effects, we instead model the density of logit(α + (1 − α) ⊙ x/256), where α is picked here as .05. We take into account this transformation when computing log-likelihood and bits per dimension. We also augment the CIFAR-10, CelebA and LSUN datasets during training to also include horizontal flips of the training examples.
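A minimal sketch of this preprocessing, with α = 0.05 as in the text; the log-determinant accounting shown here is one common way to keep bits/dim comparable across the transform and is our own illustration rather than the paper's exact bookkeeping:

```python
import numpy as np

def preprocess(pixels, alpha=0.05):
    """Jitter discrete pixel values and map them through the logit transform,
    so the model sees unbounded real-valued inputs."""
    x = (pixels + np.random.uniform(size=pixels.shape)) / 256.0   # jittering
    p = alpha + (1 - alpha) * x                                   # keep away from {0, 1}
    y = np.log(p) - np.log(1 - p)                                 # logit
    # log|det| of pixel -> y, needed when reporting comparable bits/dim:
    log_det = (np.log(1 - alpha) - np.log(256.0)
               - np.log(p) - np.log(1 - p)).sum()
    return y, log_det

pixels = np.random.randint(0, 256, size=(32, 32, 3))
y, log_det = preprocess(pixels)
print(y.shape, log_det)
```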
We train our model on four natural image datasets: CIFAR-10 [36], Imagenet [52], Large-scale Scene Understanding (LSUN) [70], CelebFaces Attributes (CelebA) [41]. More specifically, we train on the downsampled to 32 × 32 and 64 × 64 versions of Imagenet [46]. For the LSUN dataset, we train on the bedroom, tower and church outdoor categories. The procedure for LSUN is the same as in [47]: we downsample the image so that the smallest side is 96 pixels and take random crops of 64 × 64. For CelebA, we use the same procedure as in [38]: we take an approximately central crop of 148 × 148 then resize it to 64 × 64.
We use the multi-scale architecture described in Section 3.6 and use deep convolutional residual networks in the coupling layers with rectifier nonlinearity and skip-connections as suggested by [46]. To compute the scaling functions s, we use a hyperbolic tangent function multiplied by a learned scale, whereas the translation function t has an affine output. Our multi-scale architecture is repeated recursively until the input of the last recursion is a 4 × 4 × c tensor. For datasets of images of size 32 × 32, we use 4 residual blocks with 32 hidden feature maps for the first coupling layers with checkerboard masking. Only 2 residual blocks are used for images of size 64 × 64. We use a batch size of 64. For CIFAR-10, we use 8 residual blocks, 64 feature maps, and downscale only once. We optimize with ADAM [33] with default hyperparameters and use an L2 regularization on the weight scale parameters with coefficient 5 · 10^−5.
We set the prior pZ to be an isotropic unit norm Gaussian. However, any distribution could be used for pZ, including distributions that are also learned during training, such as from an auto-regressive model, or (with slight modifications to the training objective) a variational autoencoder.
Dataset                  PixelRNN [46]   Real NVP      Conv DRAW [22]   IAF-VAE [34]
CIFAR-10                 3.00            3.49          < 3.59           < 3.28
Imagenet (32 × 32)       3.86 (3.83)     4.28 (4.26)   < 4.40 (4.35)
Imagenet (64 × 64)       3.63 (3.57)     3.98 (3.75)   < 4.10 (4.04)
LSUN (bedroom)                           2.72 (2.70)
LSUN (tower)                             2.81 (2.78)
LSUN (church outdoor)                    3.08 (2.94)
CelebA                                   3.02 (2.97)
Table 1: Bits/dim results for CIFAR-10, Imagenet, LSUN datasets and CelebA. Test results for CIFAR-10 and validation results for Imagenet, LSUN and CelebA (with training results in parenthesis for reference).
Figure 5: On the left column, examples from the dataset. On the right column, samples from the model trained on the dataset. The datasets shown in this figure are in order: CIFAR-10, Imagenet (32 × 32), Imagenet (64 × 64), CelebA, LSUN (bedroom).
# 4.2 Results
We show in Table 1 that the number of bits per dimension, while not improving over the Pixel RNN [46] baseline, is competitive with other generative methods. As we notice that our performance increases with the number of parameters, larger models are likely to further improve performance. For CelebA and LSUN, the bits per dimension for the validation set was decreasing throughout training, so little overfitting is expected.
We show in Figure 5 samples generated from the model with training examples from the dataset for comparison. As mentioned in [62, 22], maximum likelihood is a principle that values diversity
Figure 6: Manifold generated from four examples in the dataset. Clockwise from top left: CelebA, Imagenet (64 × 64), LSUN (tower), LSUN (bedroom).
over sample quality in a limited capacity setting. As a result, our model outputs sometimes highly improbable samples as we can notice especially on CelebA. As opposed to variational autoencoders, the samples generated from our model look not only globally coherent but also sharp. Our hypothesis is that as opposed to these models, real NVP does not rely on fixed form reconstruction cost like an L2 norm which tends to reward capturing low frequency components more heavily than high frequency components. Unlike autoregressive models, sampling from our model is done very efficiently as it is parallelized over input dimensions. On Imagenet and LSUN, our model seems to have captured well the notion of background/foreground and lighting interactions such as luminosity and consistent light source direction for reflectance and shadows.
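A minimal sketch of the sampling procedure, assuming a hypothetical flow_inverse function that applies the learned coupling and squeeze layers in reverse order (the identity lambda below is only a stand-in): draw z from the Gaussian prior and push it through g = f⁻¹ in a single parallel pass.

```python
import numpy as np

def sample(flow_inverse, n, dim):
    z = np.random.normal(size=(n, dim))   # z ~ p_Z = N(0, I)
    return flow_inverse(z)                # x = g(z), computed in one pass over all dims

x = sample(lambda z: z, n=4, dim=8)       # identity flow as a stand-in
print(x.shape)                            # (4, 8)
```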
We also illustrate the smooth semantically consistent meaning of our latent variables. In the latent space, we define a manifold based on four validation examples z(1), z(2), z(3), z(4), and parametrized by two parameters φ and φ′ by,
z = cos(φ) (cos(φ′) z(1) + sin(φ′) z(2))
+ sin(φ) (cos(φ′) z(3) + sin(φ′) z(4)) (19)
We project the resulting manifold back into the data space by computing g(z). Results are shown in Figure 6. We observe that the model seems to have organized the latent space with a notion of meaning that goes well beyond pixel space interpolation. More visualizations are shown in the Appendix. To further test whether the latent space has a consistent semantic interpretation, we trained a class-conditional model on CelebA, and found that the learned representation had a consistent semantic meaning across class labels (see Appendix F).
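A sketch of the two-parameter manifold of Equation (19); the random vectors below stand in for encoded validation examples, and each latent point would then be decoded with g(z):

```python
import numpy as np

def manifold_point(z1, z2, z3, z4, phi, phi_prime):
    """Two-parameter latent manifold spanned by four encoded examples (Eq. 19)."""
    return (np.cos(phi) * (np.cos(phi_prime) * z1 + np.sin(phi_prime) * z2)
            + np.sin(phi) * (np.cos(phi_prime) * z3 + np.sin(phi_prime) * z4))

zs = [np.random.randn(64) for _ in range(4)]        # stand-ins for f(x) of 4 examples
grid = [[manifold_point(*zs, phi, phip)
         for phip in np.linspace(0, np.pi / 2, 5)]
        for phi in np.linspace(0, np.pi / 2, 5)]
print(np.array(grid).shape)                         # (5, 5, 64): a 5x5 grid of latents
```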
# 5 Discussion and conclusion
In this paper, we have defined a class of invertible functions with tractable Jacobian determinant, enabling exact and tractable log-likelihood evaluation, inference, and sampling. We have shown that this class of generative model achieves competitive performances, both in terms of sample quality and log-likelihood. Many avenues exist to further improve the functional form of the transformations, for instance by exploiting the latest advances in dilated convolutions [69] and residual networks architectures [60].
This paper presented a technique bridging the gap between auto-regressive models, variational autoencoders, and generative adversarial networks. Like auto-regressive models, it allows tractable and exact log-likelihood evaluation for training. It allows however a much more flexible functional form, similar to that in the generative model of variational autoencoders. This allows for fast and exact sampling from the model distribution. Like GANs, and unlike variational autoencoders, our technique does not require the use of a fixed form reconstruction cost, and instead defines a cost in terms of higher level features, generating sharper images. Finally, unlike both variational
autoencoders and GANs, our technique is able to learn a semantically meaningful latent space which is as high dimensional as the input space. This may make the algorithm particularly well suited to semi-supervised learning tasks, as we hope to explore in future work.
Real NVP generative models can additionally be conditioned on additional variables (for instance class labels) to create a structured output algorithm. More so, as the resulting class of invertible transformations can be treated as a probability distribution in a modular way, it can also be used to improve upon other probabilistic models like auto-regressive models and variational autoencoders. For variational autoencoders, these transformations could be used both to enable a more flexible reconstruction cost [38] and a more flexible stochastic inference distribution [48]. Probabilistic models in general can also benefit from batch normalization techniques as applied in this paper.
The definition of powerful and trainable invertible functions can also benefit domains other than generative unsupervised learning. For example, in reinforcement learning, these invertible functions can help extend the set of functions for which an argmax operation is tractable for continuous Q-learning [23] or find representation where local linear Gaussian approximations are more appropriate [67].
# 6 Acknowledgments
The authors thank the developers of Tensorflow [1]. We thank Sherry Moore, David Andersen and Jon Shlens for their help in implementing the model. We thank Aäron van den Oord, Yann Dauphin, Kyle Kastner, Chelsea Finn, Maithra Raghu, David Warde-Farley, Daniel Jiwoong Im and Oriol Vinyals for fruitful discussions. Finally, we thank Ben Poole, Rafal Jozefowicz and George Dahl for their input on a draft of the paper.
# References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Vijay Badrinarayanan, Bamdev Mishra, and Roberto Cipolla. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015.
[3] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. Density modeling of images using a generalized normalization transformation. arXiv preprint arXiv:1511.06281, 2015.
[4] Anthony J Bell and Terrence J Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129â1159, 1995.
[5] Yoshua Bengio. Artificial neural networks and their application to sequence recognition. 1991.
[6] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pages 400–406, 1999.
[7] Mathias Berglund and Tapani Raiko. Stochastic gradient estimate variance in contrastive divergence and persistent contrastive divergence. arXiv preprint arXiv:1312.6002, 2013.
[8] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[9] Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super-resolution with deep convolutional sufficient statistics. arXiv preprint arXiv:1511.05666, 2015.
[10] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
[11] Scott Shaobing Chen and Ramesh A Gopinath. Gaussianization. In Advances in Neural Information Processing Systems, 2000.
[12] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pages 2962â2970, 2015.
[13] Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural computation, 7(5):889–904, 1995.
[14] Gustavo Deco and Wilfried Brauer. Higher order statistical decorrelation without information loss. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 247–254. MIT Press, 1995.
[15] Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1486–1494, 2015.
[16] Luc Devroye. Sample-based non-uniform random variate generation. In Proceedings of the 18th conference on Winter simulation, pages 260–265. ACM, 1986.
[17] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
[18] Brendan J Frey. Graphical models for machine learning and digital communication. MIT press, 1998.
[19] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 262–270, 2015.
[20] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: masked autoencoder for distribution estimation. CoRR, abs/1502.03509, 2015.
[21] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672–2680, 2014.
[22] Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. arXiv preprint arXiv:1604.08772, 2016.
[23] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with model-based acceleration. arXiv preprint arXiv:1603.00748, 2016.
[24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016.
[26] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735â1780, 1997.
[27] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303â1347, 2013.
[28] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent component analysis, volume 46. John Wiley & Sons, 2004.
[29] Aapo Hyvärinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429â439, 1999.
[30] Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
[31] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[32] Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016.
[33] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[34] Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
[35] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[36] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[37] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
[38] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015.
[39] Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–48. Springer, 2012.
[40] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.
[41] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
[42] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[43] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
[44] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
[45] Radford M Neal and Geoffrey E Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pages 355–368. Springer, 1998.
[46] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[47] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
[48] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[49] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[50] Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv preprint arXiv:1302.5125, 2013. | 1605.08803#38 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 39 | [50] Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv preprint arXiv:1302.5125, 2013.
[51] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive modeling, 5(3):1, 1988.
[52] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[53] Ruslan Salakhutdinov and Geoffrey E Hinton. Deep Boltzmann machines. In International conference on artificial intelligence and statistics, pages 448–455, 2009.
[54] Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016. | 1605.08803#39 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
[55] Tim Salimans, Diederik P Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460, 2014.
[56] Lawrence K Saul, Tommi Jaakkola, and Michael I Jordan. Mean field theory for sigmoid belief networks. Journal of artificial intelligence research, 4(1):61–76, 1996.
[57] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[58] Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, DTIC Document, 1986.
[59] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2256–2265, 2015. | 1605.08803#40 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 41 | [60] Sasha Targ, Diogo Almeida, and Kevin Lyman. Resnet in resnet: Generalizing residual architectures. CoRR, abs/1603.08029, 2016.
[61] Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems, pages 1918–1926, 2015.
[62] Lucas Theis, Aäron Van Den Oord, and Matthias Bethge. A note on the evaluation of generative models. CoRR, abs/1511.01844, 2015.
[63] Dustin Tran, Rajesh Ranganath, and David M Blei. Variational Gaussian process. arXiv preprint arXiv:1511.06499, 2015.
[64] Benigno Uria, Iain Murray, and Hugo Larochelle. RNADE: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, pages 2175–2183, 2013.
[65] Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning functions across many orders of magnitudes. arXiv preprint arXiv:1602.07714, 2016. | 1605.08803#41 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 42 | [66] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015.
[67] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2728–2736, 2015.
[68] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
[69] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
[70] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
[71] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. arXiv preprint arXiv:1603.08511, 2016.
A Samples | 1605.08803#42 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
Figure 7: Samples from a model trained on Imagenet (64 × 64).
Figure 8: Samples from a model trained on CelebA.
Figure 9: Samples from a model trained on LSUN (bedroom category).
Figure 10: Samples from a model trained on LSUN (church outdoor category).
Figure 11: Samples from a model trained on LSUN (tower category).
B Manifold
Figure 12: Manifold from a model trained on Imagenet (64 × 64). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in the interpolation equation given in the text, where the x-axis corresponds to φ and the y-axis to φ′.
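As a rough reading of this caption (a sketch only: the exact interpolation equation lives elsewhere in the paper and is not reproduced here; `manifold_grid` and `z1`–`z4` are illustrative names), one plausible two-angle sweep over the latent codes of four validation images looks like:

```python
# Illustrative two-angle latent interpolation consistent with the caption
# above (x-axis: phi, y-axis: phi_prime). z1..z4 stand for latent codes of
# four validation images obtained by exact inference through the flow.
import numpy as np

def manifold_grid(z1, z2, z3, z4, n=8):
    """Return an n x n grid of latent codes spanned by two angles."""
    phis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    grid = []
    for phi in phis:                  # one row per value of phi
        row = []
        for phi_p in phis:            # one column per value of phi_prime
            z = (np.cos(phi) * (np.cos(phi_p) * z1 + np.sin(phi_p) * z2)
                 + np.sin(phi) * (np.cos(phi_p) * z3 + np.sin(phi_p) * z4))
            row.append(z)
        grid.append(row)
    return np.asarray(grid)           # decode each code with the inverse flow

# Example with random stand-ins for the four latent codes:
rng = np.random.default_rng(0)
z1, z2, z3, z4 = (rng.standard_normal(64) for _ in range(4))
grid = manifold_grid(z1, z2, z3, z4, n=4)
print(grid.shape)                     # (4, 4, 64)
```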
Published as a conference paper at ICLR 2017 | 1605.08803#43 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |