Dataset schema (per record):
- doi: string (10 chars)
- chunk-id: int64 (0–936)
- chunk: string (401–2.02k chars)
- id: string (12–14 chars)
- title: string (8–162 chars)
- summary: string (228–1.92k chars)
- source: string (31 chars)
- authors: string (7–6.97k chars)
- categories: string (5–107 chars)
- comment: string (4–398 chars)
- journal_ref: string (8–194 chars)
- primary_category: string (5–17 chars)
- published: string (8 chars)
- updated: string (8 chars)
- references: list
1710.10304
23
and 8-shot learning. PixelCNN and Attention PixelCNN models are also fast to train: 10K iterations with batch size 32 took under an hour on NVIDIA Tesla K80 GPUs. We also report new results of training a ConvDRAW (Gregor et al., 2016) on this task. While its likelihoods are significantly worse than those of Attention PixelCNN, they are otherwise state-of-the-art, and qualitatively the samples look as good. We include ConvDRAW samples on Omniglot for comparison in appendix section 6.2. Table 2 (Omniglot NLL in nats/pixel with four support examples): Conditional PixelCNN 0.077 (train 0.067); Attention PixelCNN 0.066 (0.062); Meta PixelCNN 0.068 (0.065); Attention Meta PixelCNN 0.069 (0.065). Attention Meta PixelCNN is a model combining attention with gradient-based weight updates for few-shot learning.
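As a point of reference for how the nats/pixel numbers in Table 2 are obtained, here is a minimal sketch (with toy tensors and a hypothetical model output, not the authors' code) that turns a per-pixel categorical cross-entropy into nats per pixel:

```python
import torch
import torch.nn.functional as F

def nll_nats_per_pixel(logits, targets):
    """Average negative log-likelihood in nats per pixel.

    logits:  (batch, 256, H, W) unnormalized scores over pixel intensities.
    targets: (batch, H, W) integer pixel values in [0, 255].
    cross_entropy uses the natural log, so the mean is already in nats.
    """
    return F.cross_entropy(logits, targets, reduction="mean")

# Toy usage on a small grayscale batch:
logits = torch.randn(32, 256, 26, 26)
targets = torch.randint(0, 256, (32, 26, 26))
print(nll_nats_per_pixel(logits, targets).item())
```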
1710.10304#23
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10304
24
Table 2: Omniglot NLL in nats/pixel with four support examples. Attention Meta PixelCNN is a model combining attention with gradient-based weight updates for few-shot learning. Meta PixelCNN also achieves state-of-the-art likelihoods, only outperformed by Attention PixelCNN (see Table 2). Naively combining attention and meta learning does not seem to help. However, there are likely more effective ways to combine them, such as varying the inner loss function or using multiple meta-gradient steps, which could be future work. [Figure 4: Typical Omniglot samples from PixelCNN, Attention PixelCNN, and Meta PixelCNN; columns show the supports and each model's samples.] Figure 1 shows several key frames of the attention model sampling Omniglot. Within each column, the left part shows the 4 support set images. The red overlay indicates the attention head read weights; the red attention pixel is shown over the center of the corresponding patch to which it attends. The right part shows the progress of sampling the image, which proceeds in raster order. We observe that, as expected, the network learns to attend to corresponding regions of the support set when drawing each portion of the output image. Figure 4 compares results with and without attention. Here, the difference in likelihood clearly correlates with improvement in sample quality.
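The gradient-based weight updates behind Meta PixelCNN follow the MAML pattern; below is a minimal sketch of one inner-loop adaptation step (a hypothetical helper that assumes the support-set NLL has already been computed; not the authors' implementation):

```python
import torch

def inner_adapt(model, support_nll, inner_lr=0.1):
    """One MAML-style inner step: adapt parameters on the support-set NLL.

    support_nll: scalar loss on the support images.
    Returns adapted parameters; an outer loop would evaluate the target
    image's NLL under these and backpropagate through this step
    (hence create_graph=True).
    """
    params = dict(model.named_parameters())
    grads = torch.autograd.grad(support_nll, list(params.values()),
                                create_graph=True)
    return {name: p - inner_lr * g
            for (name, p), g in zip(params.items(), grads)}
```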
1710.10304#24
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
25
[Figure 3: (a)–(g) CELEBA examples corresponding to rows in Table 1; these are intentionally non-converged. (h) Our converged result. Notice that some images show aliasing and some are not sharp – this is a flaw of the dataset, which the model learns to replicate faithfully.] resolution. CELEBA is particularly well suited for such comparison because the training images contain noticeable artifacts (aliasing, compression, blur) that are difficult for the generator to reproduce faithfully. In this test we amplify the differences between training configurations by choosing a relatively low-capacity network structure (Appendix A.2) and terminating the training once the discriminator has been shown a total of 10M real images. As such the results are not fully converged.
1710.10196#25
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10304
25
set when drawing each portion of the output image. Figure 4 compares results with and without attention. Here, the difference in likelihood clearly correlates with improvement in sample quality. 4.3 STANFORD ONLINE PRODUCTS In this section we demonstrate results on natural images from online product listings in the Stanford Online Products Dataset (Song et al., 2016). The data consists of sets of images showing the same product gathered from eBay product listings. There are 12 broad product categories. The training set has 11,318 distinct objects and the testing set has 11,316 objects. The task is, given a set of 3 images of a single object, to induce a density model over images of that object. This is a very challenging problem because the camera viewpoint of the target image is arbitrary and unknown, and the background may also change dramatically. Some products are shown cleanly against a white background, and others are shown in a usage context. Some views show the entire product, and others zoom in on a small region.
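A few-shot episode on this dataset amounts to sampling several views of one object; a minimal sketch of the episode construction (hypothetical data layout, using the duplication rule for objects with too few images mentioned later in the text):

```python
import random

def sample_episode(images_by_object, n_support=3):
    """Sample one few-shot density-estimation episode.

    images_by_object: dict mapping object id -> list of image paths
    (each object must have at least two images).
    Returns (support, target): n_support conditioning images plus one
    held-out image of the same object to score under the model.
    """
    obj = random.choice(list(images_by_object))
    imgs = list(images_by_object[obj])
    random.shuffle(imgs)
    target, pool = imgs[0], imgs[1:]
    # If fewer than n_support images remain, duplicate existing ones.
    support = [pool[i % len(pool)] for i in range(n_support)]
    return support, target
```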
1710.10304#25
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
26
Table 1 lists the numerical values for SWD and MS-SSIM in several training configurations, where our individual contributions are cumulatively enabled one by one on top of the baseline (Gulrajani et al., 2017). The MS-SSIM numbers were averaged from 10,000 pairs of generated images, and SWD was calculated as described in Section 5. Generated CELEBA images from these configurations are shown in Figure 3. Due to space constraints, the figure shows only a small number of examples for each row of the table, but a significantly broader set is available in Appendix H. Intuitively, a good evaluation metric should reward plausible images that exhibit plenty of variation in colors, textures, and viewpoints. However, this is not captured by MS-SSIM: we can immediately see that configuration (h) generates significantly better images than configuration (a), but MS-SSIM remains approximately unchanged because it measures only the variation between outputs, not similarity to the training set. SWD, on the other hand, does indicate a clear improvement.
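For concreteness, a minimal sketch of the MS-SSIM averaging protocol described above (assuming the third-party pytorch-msssim package for the pairwise metric; images must be large enough for its five-scale pyramid):

```python
import torch
from pytorch_msssim import ms_ssim  # assumed third-party dependency

def mean_pairwise_ms_ssim(sample_fn, n_pairs=10_000):
    """Average MS-SSIM over random pairs of generated images.

    sample_fn(n) -> (n, 3, H, W) float tensor in [0, 1].
    A high value means the generator's outputs resemble each other
    (low variation); it says nothing about similarity to real data.
    """
    total = 0.0
    with torch.no_grad():
        for _ in range(n_pairs):
            a, b = sample_fn(1), sample_fn(1)
            total += ms_ssim(a, b, data_range=1.0).item()
    return total / n_pairs
```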
1710.10196#26
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10304
26
For this dataset, we found it important to use a multiscale architecture as in Reed et al. (2017). We used three scales: 8 × 8, 16 × 16 and 32 × 32. The base scale uses the standard PixelCNN architecture with 12 layers and 128 planes per layer, with 512 planes in the penultimate layer. The upscaling networks use 18 layers with 128 planes each. In Attention PixelCNN, the second half of the layers condition on attention features in both the base and upscaling networks. [Figure 5: Stanford Online Products; columns show the source (support) images alongside samples with and without attention. Samples from Attention PixelCNN tend to match textures and colors from the support set, which is less apparent in samples from the non-attentive model.] Figure 5 shows the result of sampling with the baseline PixelCNN and the attention model. Note that in cases where fewer than 3 images are available, we simply duplicate other support images. We observe that the baseline model can sometimes generate images of the right broad category, such as bicycles. However, it usually fails to learn the style and texture of the support images. The attention model is able to more accurately capture the objects, in some cases starting to copy textures such as the red character depicted on a white mug.
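As a compact restatement of the architecture just described, here is a hypothetical configuration dictionary (the names are ours, not the released code):

```python
# Hypothetical config mirroring the multiscale description in the text.
multiscale_config = {
    "scales": [(8, 8), (16, 16), (32, 32)],
    "base": {"layers": 12, "planes": 128, "penultimate_planes": 512},
    "upscalers": {"layers": 18, "planes": 128},
    # In Attention PixelCNN, the second half of the layers in both the
    # base and upscaling networks condition on attention features.
    "attend_from_layer": {"base": 6, "upscalers": 9},
}
```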
1710.10304#26
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
27
The first training configuration (a) corresponds to Gulrajani et al. (2017), featuring batch normalization in the generator, layer normalization in the discriminator, and a minibatch size of 64. (b) enables progressive growing of the networks, which results in sharper and more believable output images. SWD correctly finds the distribution of generated images to be more similar to the training set. Our primary goal is to enable high output resolutions, and this requires reducing the size of minibatches in order to stay within the available memory budget. We illustrate the ensuing challenges in (c), where we decrease the minibatch size from 64 to 16. The generated images are unnatural, which is clearly visible in both metrics. In (d), we stabilize the training process by adjusting the hyperparameters as well as by removing batch normalization and layer normalization (Appendix A.2). As an intermediate test (e∗), we enable minibatch discrimination (Salimans et al., 2016), which somewhat surprisingly fails to improve any of the metrics, including MS-SSIM, which measures output variation. In contrast, our minibatch standard deviation (e) improves the average SWD scores and images. We then enable our remaining contributions in (f) and (g), leading to an overall improvement in SWD
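A minimal sketch of the minibatch standard deviation operation enabled in configuration (e) (simplified here to a single statistic over the whole minibatch; the full method may use groups):

```python
import torch

def minibatch_stddev(x, eps=1e-8):
    """Append the average across-minibatch stddev as an extra feature map.

    x: (N, C, H, W). Computes the per-location standard deviation over
    the batch, averages it to one scalar, and tiles that scalar as an
    additional channel, giving the discriminator a variation signal.
    """
    std = torch.sqrt(x.var(dim=0, unbiased=False) + eps)      # (C, H, W)
    stat = std.mean().expand(x.shape[0], 1, x.shape[2], x.shape[3])
    return torch.cat([x, stat], dim=1)                        # (N, C+1, H, W)
```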
1710.10196#27
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10304
27
Interestingly, unlike on the other datasets, we do not observe a quantitative benefit in terms of test likelihood from the attention model. The baseline model and the attention model achieve 2.15 and 2.14 nats/dim on the validation set, respectively. While likelihood appears to be a useful objective that, combined with attention, can generate compelling samples, this suggests that quantitative criteria other than likelihood may be needed for evaluating few-shot visual concept learning. # 5 CONCLUSIONS In this paper we adapted PixelCNN to the task of few-shot density estimation. Comparing against several strong baselines, we showed that Attention PixelCNN achieves state-of-the-art results on Omniglot and promising results on natural images. The model is very simple and fast to train. By looking at the attention weights, we see that it learns sensible algorithms for generation tasks such as image mirroring and handwritten character drawing. With the Meta PixelCNN model, we also showed that recently proposed methods for gradient-based meta learning can be used for few-shot density estimation, achieving state-of-the-art likelihood results on Omniglot. # REFERENCES
1710.10304#27
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
28
[Figure 4: Effect of progressive growing on training speed and convergence. The timings were measured on a single-GPU setup using an NVIDIA Tesla P100. (a) Statistical similarity with respect to wall-clock time for Gulrajani et al. (2017) using CELEBA at 128 × 128 resolution; each graph represents sliced Wasserstein distance on one level of the Laplacian pyramid, and the vertical line indicates the point where we stop the training in Table 1. (b) The same graph with progressive growing enabled; the dashed vertical lines indicate points where we double the resolution of G and D. (c) Effect of progressive growing on the raw training speed at 1024 × 1024 resolution.] and subjective visual quality. Finally, in (h) we use a non-crippled network and longer training – we feel the quality of the generated images is at least comparable to the best published results so far. # 6.2 CONVERGENCE AND TRAINING SPEED
1710.10196#28
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10304
28
# REFERENCES Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In NIPS, 2016. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. Sergey Bartunov and Dmitry P. Vetrov. Fast adaptation in generative models with generative matching networks. arXiv preprint arXiv:1612.02192, 2016. Jörg Bornschein, Andriy Mnih, Daniel Zoran, and Danilo J. Rezende. Variational memory addressing in generative models. In NIPS, 2017. Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, and Nando de Freitas. Learning to learn for global optimization of black box functions. In ICML, 2017. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248–255, 2009.
1710.10304#28
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
29
Figure 4 illustrates the effect of progressive growing in terms of the SWD metric and raw image throughput. The first two plots correspond to the training configuration of Gulrajani et al. (2017) without and with progressive growing. We observe that the progressive variant offers two main benefits: it converges to a considerably better optimum and also reduces the total training time by about a factor of two. The improved convergence is explained by an implicit form of curriculum learning that is imposed by the gradually increasing network capacity. Without progressive growing, all layers of the generator and discriminator are tasked with simultaneously finding succinct intermediate representations for both the large-scale variation and the small-scale detail. With progressive growing, however, the existing low-resolution layers are likely to have already converged early on, so the networks are only tasked with refining the representations by increasingly smaller-scale effects as new layers are introduced. Indeed, we see in Figure 4(b) that the largest-scale statistical similarity curve (16) reaches its optimal value very quickly and remains consistent
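New layers are typically introduced smoothly rather than switched on at once; a minimal sketch of the fade-in blending commonly used for this (a hypothetical function, simplified relative to the paper's full scheme):

```python
import torch.nn.functional as F

def fade_in_output(low_res_rgb, new_block_rgb, alpha):
    """Blend the previous resolution's output with a newly added block.

    low_res_rgb:   to-RGB output of the already-trained lower resolution.
    new_block_rgb: to-RGB output of the freshly added higher-res block.
    alpha ramps from 0 to 1 during training, so the new layers are
    introduced gradually instead of shocking the trained network.
    """
    upsampled = F.interpolate(low_res_rgb, scale_factor=2, mode="nearest")
    return (1 - alpha) * upsampled + alpha * new_block_rgb
```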
1710.10196#29
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10304
29
Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. arXiv preprint arXiv:1703.07326, 2017. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017a. Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. arXiv preprint arXiv:1709.04905, 2017b. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo J. Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1462–1471, 2015.
1710.10304#29
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
30
Indeed, we see in Figure 4(b) that the largest-scale statistical similarity curve (16) reaches its optimal value very quickly and remains consistent throughout the rest of the training. The smaller-scale curves (32, 64, 128) level off one by one as the resolution is increased, but the convergence of each curve is equally consistent. With non-progressive training in Figure 4(a), each scale of the SWD metric converges roughly in unison, as could be expected.
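For reference, the sliced Wasserstein distance behind these curves can be approximated as below (a simplified sketch over two equal-sized descriptor sets; the paper computes it on local patches drawn from Laplacian pyramid levels):

```python
import torch

def sliced_wasserstein(a, b, n_projections=512):
    """Approximate SWD between descriptor sets a, b of shape (N, D).

    Projects both sets onto random unit directions, sorts the 1-D
    projections, and averages the distance between sorted values;
    sorting solves the 1-D optimal transport problem exactly.
    """
    dirs = torch.randn(a.shape[1], n_projections)
    dirs /= dirs.norm(dim=0, keepdim=True)
    pa, _ = torch.sort(a @ dirs, dim=0)
    pb, _ = torch.sort(b @ dirs, dim=0)
    return (pa - pb).abs().mean()
```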
1710.10196#30
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10304
30
Karol Gregor, Frederic Besse, Danilo J. Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. In Advances in Neural Information Processing Systems, pp. 3549–3557, 2016. Harry F. Harlow. The formation of learning sets. Psychological Review, 56(1):51, 1949. Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In ICANN, pp. 87–94. Springer, 2001. Brenden M. Lake, Ruslan R. Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a compositional causal process. In NIPS, pp. 2526–2534, 2013. Gergely Neu and Csaba Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. arXiv preprint arXiv:1206.5264, 2012. Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
1710.10304#30
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
31
The speedup from progressive growing increases as the output resolution grows. Figure 4(c) shows training progress, measured in the number of real images shown to the discriminator, as a function of training time when the training progresses all the way to 1024² resolution. We see that progressive growing gains a significant head start because the networks are shallow and quick to evaluate at the beginning. Once the full resolution is reached, the image throughput is equal between the two methods. The plot shows that the progressive variant reaches approximately 6.4 million images in 96 hours, whereas it can be extrapolated that the non-progressive variant would take about 520 hours to reach the same point. In this case, progressive growing offers roughly a 5.4× speedup. # 6.3 HIGH-RESOLUTION IMAGE GENERATION USING CELEBA-HQ DATASET To meaningfully demonstrate our results at high output resolutions, we need a sufficiently varied high-quality dataset. However, virtually all publicly available datasets previously used in the GAN literature are limited to relatively low resolutions, ranging from 32² to 480². To this end, we created a high-quality version of the CELEBA dataset consisting of 30,000 images at 1024 × 1024 resolution. We refer to Appendix C for further details about the generation of this dataset.
1710.10196#31
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10304
31
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text-to-image synthesis. In ICML, pp. 1060–1069, 2016. Scott E. Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez, Ziyu Wang, Dan Belov, and Nando de Freitas. Parallel multiscale autoregressive density estimation. In ICML, 2017. Danilo J. Rezende, Ivo Danihelka, Karol Gregor, Daan Wierstra, et al. One-shot generalization in deep generative models. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1521–1529, 2016. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, 2016. Pranav Shyam, Shubham Gupta, and Ambedkar Dukkipati. Attentive recurrent comparators. In ICML, 2017.
1710.10304#31
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10304
32
Pranav Shyam, Shubham Gupta, and Ambedkar Dukkipati. Attentive recurrent comparators. In ICML, 2017. Linda Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies. Artificial Life, 11(1-2):13–29, 2005. Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. Elizabeth S. Spelke and Katherine D. Kinzler. Core knowledge. Developmental Science, 10(1):89–96, 2007. Sebastian Thrun and Lorien Pratt. Learning to Learn. Springer Science & Business Media, 1998. Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. In NIPS, 2016. Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In NIPS, 2016.
1710.10304#32
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
33
[Figure 6: Visual quality comparison in LSUN BEDROOM; pictures copied from the cited articles. Panels: Mao et al. (2016b) at 128 × 128, Gulrajani et al. (2017) at 128 × 128, and ours at 256 × 256.] Our contributions allow us to deal with high output resolutions in a robust and efficient fashion. Figure 5 shows selected 1024 × 1024 images produced by our network. While megapixel GAN results have been shown before on another dataset (Marchesi, 2017), our results are vastly more varied and of higher perceptual quality. Please refer to Appendix F for a larger set of result images as well as the nearest neighbors found in the training data. The accompanying video shows latent space interpolations and visualizes the progressive training. The interpolation works so that we first randomize a latent code for each frame (512 components sampled individually from N(0, 1)), then blur the latents across time with a Gaussian (σ = 45 frames at 60 Hz), and finally normalize each vector to lie on a hypersphere. We trained the network on 8 Tesla V100 GPUs for 4 days, after which we no longer observed qualitative differences between the results of consecutive training iterations. Our implementation used an adaptive minibatch size depending on the current output resolution so that the available memory budget was optimally utilized.
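The interpolation recipe above is straightforward to reproduce; a minimal sketch (SciPy's gaussian_filter1d for the temporal blur; the hypersphere radius sqrt(512) is our assumption, since the text does not state it):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_latents(n_frames, dim=512, sigma=45.0, seed=0):
    """Latent codes for a smooth interpolation video, per the text.

    Draw one latent per frame from N(0, 1), blur across time with a
    Gaussian (sigma in frames at 60 Hz), then renormalize each vector
    onto a hypersphere (here of radius sqrt(dim), an assumption).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_frames, dim))
    z = gaussian_filter1d(z, sigma=sigma, axis=0)
    z *= np.sqrt(dim) / np.linalg.norm(z, axis=1, keepdims=True)
    return z
```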
1710.10196#33
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10304
33
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pp. 2048–2057, 2015. 6 APPENDIX 6.1 ADDITIONAL SAMPLES [Figure 6: Flipping 24×24 images, comparing global-conditional (PixelCNN), attention-conditional (Attention PixelCNN) and gradient-conditional, i.e. MAML (Meta PixelCNN).] 6.2 QUALITATIVE COMPARISON TO CONVDRAW Although all PixelCNN variants outperform the previous state-of-the-art in terms of likelihood, prior methods can still produce high quality samples, in some cases clearly better than the PixelCNN samples. Of course, there are other important factors in choosing a model that may favor autoregressive models, such as training time and scalability to few-shot density modeling on natural images. Also, the Attention PixelCNN has only 286K parameters, compared to 53M for the ConvDRAW. Still, it is notable that likelihood and sample quality lead to conflicting rankings of several models.
1710.10304#33
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10304
34
The conditional ConvDRAW model used for these experiments is a modification of the models introduced in Gregor et al. (2015) and Rezende et al. (2016), where the support set images are first encoded with 4 convolution layers without any attention mechanism and then concatenated to the ConvLSTM state at every DRAW step (we used 12 DRAW steps for this paper). The model was trained using the same protocol used for the PixelCNN experiments. [Figure 7: Comparison to ConvDRAW in 4-shot learning; support set examples alongside Attention PixelCNN samples (test NLL = 0.065 nats/dim) and ConvDRAW samples (test NLL = 0.076 nats/dim).]
1710.10304#34
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
http://arxiv.org/pdf/1710.10304
Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas
cs.NE, cs.CV
null
null
cs.NE
20171027
20180228
[ { "id": "1705.03122" }, { "id": "1709.04905" }, { "id": "1703.07326" } ]
1710.10196
35
[Figure 7: Selection of 256 × 256 images generated from different LSUN categories: potted plant, horse, sofa, bus, church outdoor, bicycle, and TV monitor.] 6.4 LSUN RESULTS Figure 6 shows a purely visual comparison between our solution and earlier results in LSUN BEDROOM. Figure 7 gives selected examples from seven very different LSUN categories at 256². A larger, non-curated set of results from all 30 LSUN categories is available in Appendix G, and the video demonstrates interpolations. We are not aware of earlier results in most of these categories, and while some categories work better than others, we feel that the overall quality is high. 6.5 CIFAR10 INCEPTION SCORES The best inception scores for CIFAR10 (10 categories of 32 × 32 RGB images) we are aware of are 7.90 for unsupervised and 8.87 for label-conditioned setups (Grinblat et al., 2017). The large difference between the two numbers is primarily caused by “ghosts” that necessarily appear between classes in the unsupervised setting, while label conditioning can remove many such transitions.
1710.10196#35
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
36
When all of our contributions are enabled, we get 8.80 in the unsupervised setting. Appendix D shows a representative set of generated images along with a more comprehensive list of results from earlier methods. The network and training setup were the same as for CELEBA, with progression limited to 32 × 32, of course. The only customization was to the WGAN-GP regularization term $\mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - \gamma)^2 / \gamma^2]$. Gulrajani et al. (2017) used γ = 1.0, which corresponds to 1-Lipschitz, but we noticed that it is in fact significantly better to prefer fast transitions (γ = 750) to minimize the ghosts. We have not tried this trick with other datasets. # 7 DISCUSSION
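A minimal sketch of that regularization term with an adjustable target γ (the standard WGAN-GP gradient penalty; the function and variable names are ours):

```python
import torch

def gradient_penalty(discriminator, real, fake, gamma=750.0):
    """WGAN-GP penalty E[(||grad_xhat D(x_hat)||_2 - gamma)^2 / gamma^2].

    x_hat is sampled on straight lines between real and fake batches.
    gamma = 1 targets 1-Lipschitz; a larger gamma (e.g. 750, as in the
    text) prefers faster transitions.
    """
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(discriminator(x_hat).sum(), x_hat,
                                create_graph=True)[0]
    norms = grads.flatten(1).norm(dim=1)
    return ((norms - gamma) ** 2 / gamma ** 2).mean()
```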
1710.10196#36
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
37
# 7 DISCUSSION While the quality of our results is generally high compared to earlier work on GANs, and the training is stable at large resolutions, there is still a long way to true photorealism. Semantic sensibility and understanding of dataset-dependent constraints, such as certain objects being straight rather than curved, leave a lot to be desired. There is also room for improvement in the micro-structure of the images. That said, we feel that convincing realism may now be within reach, especially in CELEBA-HQ. # 8 ACKNOWLEDGEMENTS We would like to thank Mikael Honkavaara, Tero Kuosmanen, and Timi Hietanen for the compute infrastructure; Dmitry Korobchenko and Richard Calderwood for efforts related to the CELEBA-HQ dataset; and Oskar Elek, Jacob Munkberg, and Jon Hasselgren for useful comments. # REFERENCES Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017. Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.
1710.10196#37
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
38
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017. Sanjeev Arora and Yi Zhang. Do GANs actually learn the distribution? An empirical study. CoRR, abs/1706.08224, 2017. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. C. Platt, and T. Hoffman (eds.), NIPS, pp. 153–160. 2007. David Berthelot, Tom Schumm, and Luke Metz. BEGAN: Boundary equilibrium generative adversarial networks. CoRR, abs/1703.10717, 2017. Peter J. Burt and Edward H. Adelson. The Laplacian pyramid as a compact image code. In Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, pp. 671–679. 1987.
1710.10196#38
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
39
Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. CoRR, abs/1707.09405, 2017. Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard H. Hovy, and Aaron C. Courville. Calibrating energy-based generative adversarial networks. In ICLR, 2017. Emily L. Denton, Soumith Chintala, Arthur Szlam, and Robert Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. CoRR, abs/1506.05751, 2015. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. CoRR, abs/1606.00704, 2016. Ishan P. Durugkar, Ian Gemp, and Sridhar Mahadevan. Generative multi-adversarial networks. CoRR, abs/1611.01673, 2016. Bernd Fritzke. A growing neural gas network learns topologies. In G. Tesauro, D. S. Touretzky, and T. K. Leen (eds.), Advances in Neural Information Processing Systems 7, pp. 625–632. 1995.
1710.10196#39
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
40
Arnab Ghosh, Viveka Kulharia, Vinay P. Namboodiri, Philip H. S. Torr, and Puneet Kumar Dokania. Multi-agent diverse generative adversarial networks. CoRR, abs/1704.02906, 2017.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In NIPS, 2014.
Guillermo L. Grinblat, Lucas C. Uzal, and Pablo M. Granitto. Class-splitting generative adversarial networks. CoRR, abs/1709.07359, 2017.
Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. CoRR, abs/1704.00028, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. CoRR, abs/1502.01852, 2015.
1710.10196#40
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
41
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS, pp. 6626–6637. 2017.
R Devon Hjelm, Athul Paul Jacob, Tong Che, Kyunghyun Cho, and Yoshua Bengio. Boundary-seeking generative adversarial networks. CoRR, abs/1702.08431, 2017.
Xun Huang, Yixuan Li, Omid Poursaeed, John E. Hopcroft, and Serge J. Belongie. Stacked generative adversarial networks. CoRR, abs/1612.04357, 2016.
Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Trans. Graph., 36(4):107:1–107:14, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
1710.10196#41
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
42
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In NIPS, volume 29, pp. 4743–4751. 2016.
Naveen Kodali, Jacob D. Abernethy, James Hays, and Zsolt Kira. How to train your DRAGAN. CoRR, abs/1705.07215, 2017.
Dmitry Korobchenko and Marco Foco. Single image super-resolution using deep learning, 2017. URL https://gwmt.nvidia.com/super-res/about. Machines Can See summit.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105. 2012.
1710.10196#42
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
43
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. CoRR, abs/1609.04802, 2016.
Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh. PacGAN: The power of two samples in generative adversarial networks. CoRR, abs/1712.04086, 2017.
Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. CoRR, abs/1703.00848, 2017.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
Alireza Makhzani and Brendan J. Frey. PixelGAN autoencoders. CoRR, abs/1706.00531, 2017.
Xiao-Jiao Mao, Chunhua Shen, and Yu-Bin Yang. Image restoration using convolutional auto-encoders with symmetric skip connections. CoRR, abs/1606.08921, 2016a.
1710.10196#43
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
44
Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Least squares generative adversarial networks. CoRR, abs/1611.04076, 2016b.
Marco Marchesi. Megapixel size image creation using generative adversarial networks. CoRR, abs/1706.00082, 2017.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. CoRR, abs/1611.02163, 2016.
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In ICML, 2017.
Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision (SSVM), pp. 435–446, 2011.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
1710.10196#44
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
45
Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. CoRR, abs/1602.07868, 2016.
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
Kenneth O. Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99–127, 2002.
Tijmen Tieleman and Geoffrey E. Hinton. Lecture 6.5 - RMSProp. Technical report, COURSERA: Neural Networks for Machine Learning, 2012.
Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Adversarial generator-encoder networks. CoRR, abs/1704.02304, 2017.
1710.10196#45
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
46
Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. CoRR, abs/1609.03499, 2016a.
Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, pp. 1747–1756, 2016b.
Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. CoRR, abs/1606.05328, 2016c.
Twan van Laarhoven. L2 regularization versus batch and weight normalization. CoRR, abs/1706.05350, 2017.
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. CoRR, abs/1711.11585, 2017.
1710.10196#46
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
47
Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. Multi-scale structural similarity for image quality assessment. In Proc. IEEE Asilomar Conf. on Signals, Systems, and Computers, pp. 1398–1402, 2003.
David Warde-Farley and Yoshua Bengio. Improving generative adversarial networks with denoising feature matching. In ICLR, 2017.
Jianwei Yang, Anitha Kannan, Dhruv Batra, and Devi Parikh. LR-GAN: Layered recursive generative adversarial networks for image generation. In ICLR, 2017.
Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. CoRR, abs/1506.03365, 2015.
Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris N. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
Junbo Jake Zhao, Michaël Mathieu, and Yann LeCun. Energy-based generative adversarial network. In ICLR, 2017.
1710.10196#47
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
49
Generator | Act. | Output shape | Params
Latent vector | – | 512 × 1 × 1 | –
Conv 4 × 4 | LReLU | 512 × 4 × 4 | 4.2M
Conv 3 × 3 | LReLU | 512 × 4 × 4 | 2.4M
Upsample | – | 512 × 8 × 8 | –
Conv 3 × 3 | LReLU | 512 × 8 × 8 | 2.4M
Conv 3 × 3 | LReLU | 512 × 8 × 8 | 2.4M
Upsample | – | 512 × 16 × 16 | –
Conv 3 × 3 | LReLU | 512 × 16 × 16 | 2.4M
Conv 3 × 3 | LReLU | 512 × 16 × 16 | 2.4M
Upsample | – | 512 × 32 × 32 | –
Conv 3 × 3 | LReLU | 512 × 32 × 32 | 2.4M
Conv 3 × 3 | LReLU | 512 × 32 × 32 | 2.4M
Upsample | – | 512 × 64 × 64 | –
Conv 3 × 3 | LReLU | 256 × 64 × 64 | 1.2M
Conv 3 × 3 | LReLU | 256 × 64 × 64 | 590k
Upsample | – | 256 × 128 × 128 | –
Conv 3 × 3 | LReLU | 128 × 128 × 128 | 295k
Conv 3 × 3 | LReLU | 128 × 128 × 128 | 148k
Upsample | – | 128 × 256 × 256 | –
Conv 3 × 3 | LReLU | 64 × 256 × 256 | 74k
Conv 3 × 3 | LReLU | 64 × 256 × 256 | 37k
Upsample | – | 64 × 512 × 512 | –
Conv 3 × 3 | LReLU | 32 × 512 × 512 | 18k
Conv 3 × 3 | LReLU | 32 × 512 × 512 | 9.2k
Upsample | – | 32 × 1024 × 1024 | –
Conv 3 × 3 | LReLU | 16 × 1024 × 1024 | 4.6k
Conv 3 × 3 | LReLU | 16 × 1024 × 1024 | 2.3k
Conv 1 × 1 | linear | 3 × 1024 × 1024 | 51
Total trainable parameters | | | 23.1M
1710.10196#49
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
50
Generator (continued) | Act. | Output shape | Params
Upsample | – | 512 × 64 × 64 | –
Conv 3 × 3 | LReLU | 256 × 64 × 64 | 1.2M
Conv 3 × 3 | LReLU | 256 × 64 × 64 | 590k
Upsample | – | 256 × 128 × 128 | –
Conv 3 × 3 | LReLU | 128 × 128 × 128 | 295k
Conv 3 × 3 | LReLU | 128 × 128 × 128 | 148k
Upsample | – | 128 × 256 × 256 | –
Conv 3 × 3 | LReLU | 64 × 256 × 256 | 74k
Conv 3 × 3 | LReLU | 64 × 256 × 256 | 37k
Upsample | – | 64 × 512 × 512 | –
Conv 3 × 3 | LReLU | 32 × 512 × 512 | 18k
Conv 3 × 3 | LReLU | 32 × 512 × 512 | 9.2k
Upsample | – | 32 × 1024 × 1024 | –
Conv 3 × 3 | LReLU | 16 × 1024 × 1024 | 4.6k
Conv 3 × 3 | LReLU | 16 × 1024 × 1024 | 2.3k
Conv 1 × 1 | linear | 3 × 1024 × 1024 | 51
Total trainable parameters | | | 23.1M
1710.10196#50
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
51
Discriminator | Act. | Output shape | Params
Input image | – | 3 × 1024 × 1024 | –
Conv 1 × 1 | LReLU | 16 × 1024 × 1024 | 64
Conv 3 × 3 | LReLU | 16 × 1024 × 1024 | 2.3k
Conv 3 × 3 | LReLU | 32 × 1024 × 1024 | 4.6k
Downsample | – | 32 × 512 × 512 | –
Conv 3 × 3 | LReLU | 32 × 512 × 512 | 9.2k
Conv 3 × 3 | LReLU | 64 × 512 × 512 | 18k
Downsample | – | 64 × 256 × 256 | –
Conv 3 × 3 | LReLU | 64 × 256 × 256 | 37k
Conv 3 × 3 | LReLU | 128 × 256 × 256 | 74k
Downsample | – | 128 × 128 × 128 | –
Conv 3 × 3 | LReLU | 128 × 128 × 128 | 148k
Conv 3 × 3 | LReLU | 256 × 128 × 128 | 295k
Downsample | – | 256 × 64 × 64 | –
Conv 3 × 3 | LReLU | 256 × 64 × 64 | 590k
Conv 3 × 3 | LReLU | 512 × 64 × 64 | 1.2M
Downsample | – | 512 × 32 × 32 | –
Conv 3 × 3 | LReLU | 512 × 32 × 32 | 2.4M
Conv 3 × 3 | LReLU | 512 × 32 × 32 | 2.4M
Downsample | – | 512 × 16 × 16 | –
Conv 3 × 3 | LReLU | 512 × 16 × 16 | 2.4M
Conv 3 × 3 | LReLU | 512 × 16 × 16 | 2.4M
Downsample | – | 512 × 8 × 8 | –
Conv 3 × 3 | LReLU | 512 × 8 × 8 | 2.4M
Conv 3 × 3 | LReLU | 512 × 8 × 8 | 2.4M
Downsample | – | 512 × 4 × 4 | –
Minibatch stddev | – | 513 × 4 × 4 | –
Conv 3 × 3 | LReLU | 512 × 4 × 4 | 2.4M
Conv 4 × 4 | LReLU | 512 × 1 × 1 | 4.2M
Fully-connected | linear | 1 × 1 × 1 | 513
Total trainable parameters | | | 23.1M
1710.10196#51
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
52
Discriminator (continued) | Act. | Output shape | Params
Conv 3 × 3 | LReLU | 128 × 128 × 128 | 148k
Conv 3 × 3 | LReLU | 256 × 128 × 128 | 295k
Downsample | – | 256 × 64 × 64 | –
Conv 3 × 3 | LReLU | 256 × 64 × 64 | 590k
Conv 3 × 3 | LReLU | 512 × 64 × 64 | 1.2M
Downsample | – | 512 × 32 × 32 | –
Conv 3 × 3 | LReLU | 512 × 32 × 32 | 2.4M
Conv 3 × 3 | LReLU | 512 × 32 × 32 | 2.4M
Downsample | – | 512 × 16 × 16 | –
Conv 3 × 3 | LReLU | 512 × 16 × 16 | 2.4M
Conv 3 × 3 | LReLU | 512 × 16 × 16 | 2.4M
Downsample | – | 512 × 8 × 8 | –
Conv 3 × 3 | LReLU | 512 × 8 × 8 | 2.4M
Conv 3 × 3 | LReLU | 512 × 8 × 8 | 2.4M
Downsample | – | 512 × 4 × 4 | –
Minibatch stddev | – | 513 × 4 × 4 | –
Conv 3 × 3 | LReLU | 512 × 4 × 4 | 2.4M
Conv 4 × 4 | LReLU | 512 × 1 × 1 | 4.2M
Fully-connected | linear | 1 × 1 × 1 | 513
Total trainable parameters | | | 23.1M
1710.10196#52
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
53
Table 2: Generator and discriminator that we use with CELEBA-HQ to generate 1024 × 1024 images.

# A NETWORK STRUCTURE AND TRAINING CONFIGURATION

# A.1 1024 × 1024 NETWORKS USED FOR CELEBA-HQ

Table 2 shows the network architectures of the full-resolution generator and discriminator that we use with the CELEBA-HQ dataset. Both networks consist mainly of replicated 3-layer blocks that we introduce one by one during the course of the training. The last Conv 1 × 1 layer of the generator corresponds to the toRGB block in Figure 2, and the first Conv 1 × 1 layer of the discriminator similarly corresponds to fromRGB. We start with 4 × 4 resolution and train the networks until we have shown the discriminator 800k real images in total. We then alternate between two phases: fade in the first 3-layer block during the next 800k images, stabilize the networks for 800k images, fade in the next 3-layer block during 800k images, etc.
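A minimal sketch of this alternating schedule, assuming a helper that simply enumerates the phases (the function and constant names are illustrative, not from the authors' code):

```python
# Progressive-growing schedule: one stabilization pass at 4x4, then
# alternating fade-in / stabilize phases of 800k real images each.
RESOLUTIONS = [4, 8, 16, 32, 64, 128, 256, 512, 1024]
IMAGES_PER_PHASE = 800_000

def schedule():
    """Yield (resolution, phase, num_images) tuples in training order."""
    yield RESOLUTIONS[0], "stabilize", IMAGES_PER_PHASE
    for res in RESOLUTIONS[1:]:
        yield res, "fade-in", IMAGES_PER_PHASE    # blend in the new 3-layer block
        yield res, "stabilize", IMAGES_PER_PHASE  # then train it at full strength

for res, phase, n in schedule():
    print(f"{phase:>9} at {res}x{res} for {n} images")
```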
1710.10196#53
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
54
Our latent vectors correspond to random points on a 512-dimensional hypersphere, and we represent training and generated images in [-1, 1]. We use leaky ReLU with leakiness 0.2 in all layers of both networks, except for the last layer that uses linear activation. We do not employ batch normalization, layer normalization, or weight normalization in either network, but we perform pixelwise normalization of the feature vectors after each Conv 3 × 3 layer in the generator as described in Section 4.2. We initialize all bias parameters to zero and all weights according to the normal distribution with unit variance. However, we scale the weights with a layer-specific constant at runtime as described in Section 4.1. We inject the across-minibatch standard deviation as an additional feature map at 4 × 4 resolution toward the end of the discriminator as described in Section 3. The upsampling and downsampling operations in Table 2 correspond to 2 × 2 element replication and average pooling, respectively.
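The two per-feature operations mentioned here are simple to express. Below is a sketch in NumPy, assuming NCHW activation tensors; the function names are mine, not the paper's:

```python
import numpy as np

def pixel_norm(x, eps=1e-8):
    """Pixelwise feature-vector normalization: scale each spatial position's
    feature vector to unit RMS over the channel axis (applied after each
    generator Conv 3x3)."""
    return x / np.sqrt(np.mean(x ** 2, axis=1, keepdims=True) + eps)

def minibatch_stddev(x):
    """Across-minibatch standard deviation, averaged to one scalar and tiled
    as one extra feature map (e.g. 512x4x4 activations become 513x4x4)."""
    mean_std = np.std(x, axis=0).mean()   # stddev over the batch, then average
    n, _, h, w = x.shape
    extra = np.full((n, 1, h, w), mean_std, dtype=x.dtype)
    return np.concatenate([x, extra], axis=1)
```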
1710.10196#54
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
55
We train the networks using Adam (Kingma & Ba, 2015) with α = 0.001, β₁ = 0, β₂ = 0.99, and ε = 10⁻⁸. We do not use any learning rate decay or rampdown, but for visualizing generator output at any given point during the training, we use an exponential running average for the weights of the generator with decay 0.999. We use a minibatch size of 16 for resolutions 4²–128² and then gradually decrease the size according to 256² → 14, 512² → 6, and 1024² → 3 to avoid exceeding the available memory budget. We use the WGAN-GP loss, but unlike Gulrajani et al. (2017), we alternate between optimizing the generator and discriminator on a per-minibatch basis, i.e., we set n_critic = 1. Additionally, we introduce a fourth term into the discriminator loss with an extremely small weight to keep the discriminator output from drifting too far away from zero. To be precise, we set L′ = L + ε_drift · E_{x∈P_r}[D(x)²], where ε_drift = 0.001.

# A.2 OTHER NETWORKS

Whenever we need to operate on a spatial resolution lower than 1024 × 1024, we do that by leaving out an appropriate number of copies of the replicated 3-layer block in both networks.
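The discriminator loss from A.1, with the drift term added to the usual WGAN-GP objective, can be sketched as follows in PyTorch (λ = 10 is the standard gradient-penalty weight from Gulrajani et al.; the text above does not restate it, so treat it as an assumption):

```python
import torch

def d_loss(D, real, fake, lambda_gp=10.0, eps_drift=0.001):
    """WGAN-GP discriminator loss plus the small drift term eps_drift * E[D(x)^2]."""
    d_real, d_fake = D(real), D(fake)
    wass = d_fake.mean() - d_real.mean()

    # Gradient penalty on random interpolates between real and fake samples.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(D(mix).sum(), mix, create_graph=True)
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

    # Fourth term: keeps D's output from drifting far from zero.
    drift = d_real.pow(2).mean()
    return wass + lambda_gp * gp + eps_drift * drift

@torch.no_grad()
def update_g_ema(g_ema, g, decay=0.999):
    """Exponential running average of generator weights, used for visualization."""
    for p_ema, p in zip(g_ema.parameters(), g.parameters()):
        p_ema.lerp_(p, 1.0 - decay)
```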
1710.10196#55
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
56
Furthermore, Section 6.1 uses a slightly lower-capacity version, where we halve the number of feature maps in Conv 3 × 3 layers at the 16 × 16 resolution and divide by 4 in the subsequent resolutions. This leaves 32 feature maps for the last Conv 3 × 3 layers. In Table 1 and Figure 4 we train each resolution for a total of 600k images instead of 800k, and also fade in new layers for the duration of 600k images.
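The per-resolution feature-map counts implied by Table 2, together with the lower-capacity rule above, can be captured in a small helper (a sketch; the function name is mine):

```python
# Feature maps used by the Conv 3x3 layers at each resolution (from Table 2),
# plus the lower-capacity variant used in Section 6.1.
FULL = {4: 512, 8: 512, 16: 512, 32: 512, 64: 256,
        128: 128, 256: 64, 512: 32, 1024: 16}

def nf(res, low_capacity=False):
    n = FULL[res]
    if low_capacity:
        if res == 16:
            n //= 2   # halved at 16x16
        elif res > 16:
            n //= 4   # divided by 4 at subsequent resolutions
    return n

assert nf(128, low_capacity=True) == 32  # "32 feature maps for the last Conv 3x3 layers"
```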
1710.10196#56
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
57
For the “Gulrajani et al. (2017)” case in Table 1, we follow their training configuration as closely as possible. In particular, we set α = 0.0001, β₂ = 0.9, n_critic = 5, ε_drift = 0, and minibatch size 64. We disable progressive resolution, minibatch stddev, as well as weight scaling at runtime, and initialize all weights using He's initializer (He et al., 2015). Furthermore, we modify the generator by replacing LReLU with ReLU, linear activation with tanh in the last layer, and pixelwise normalization with batch normalization. In the discriminator, we add layer normalization to all Conv 3 × 3 and Conv 4 × 4 layers. For the latent vectors, we use 128 components sampled independently from the normal distribution.

# B LEAST-SQUARES GAN (LSGAN) AT 1024 × 1024

We find that LSGAN is generally a less stable loss function than WGAN-GP, and it also has a tendency to lose some of the variation towards the end of long runs. Thus we prefer WGAN-GP, but have also produced high-resolution images by building on top of LSGAN. For example, the 1024² images in Figure 1 are LSGAN-based.
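For reference, the baseline configuration above and our own defaults from A.1 can be summarized side by side (plain dictionaries as a readability aid, listing only the hyperparameters stated in this appendix):

```python
ours = dict(adam_lr=0.001, adam_beta1=0.0, adam_beta2=0.99, adam_eps=1e-8,
            n_critic=1, eps_drift=0.001, minibatch=16,
            progressive=True, minibatch_stddev=True, weight_scaling=True,
            g_activation="LReLU", norm="pixelwise (G)")

gulrajani_2017 = dict(adam_lr=0.0001, adam_beta2=0.9,
                      n_critic=5, eps_drift=0.0, minibatch=64,
                      progressive=False, minibatch_stddev=False, weight_scaling=False,
                      g_activation="ReLU, tanh output", norm="batch (G) / layer (D)",
                      init="He", latent_dim=128)
```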
1710.10196#57
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
58
On top of the techniques described in Sections 2–4, we need one additional hack with LSGAN that prevents the training from spiraling out of control when the dataset is too easy for the discriminator, and the discriminator gradients are at risk of becoming meaningless as a result. We adaptively increase the magnitude of multiplicative Gaussian noise in the discriminator as a function of the discriminator's output. The noise is applied to the input of each Conv 3 × 3 and Conv 4 × 4 layer. There is a long history of adding noise to the discriminator, and it is generally detrimental to image quality (Arjovsky et al., 2017); ideally one would never have to do that, which according to our tests is the case for WGAN-GP (Gulrajani et al., 2017). The magnitude of the noise is determined as 0.2 · max(0, d̂_t − 0.5)², where d̂_t = 0.1d + 0.9 d̂_{t−1} is an exponential moving average of the discriminator output d. The motivation behind this hack is that LSGAN becomes seriously unstable when d approaches (or exceeds) 1.0.
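A sketch of this adaptive noise-strength rule (class and variable names are mine; the exact way the multiplicative noise enters the conv inputs is not spelled out above, so the `apply` helper is only indicative):

```python
import numpy as np

class AdaptiveNoise:
    """Noise magnitude 0.2 * max(0, d_ema - 0.5)^2, with d_ema an EMA of D's output."""
    def __init__(self):
        self.d_ema = 0.0

    def update(self, d_out):
        """d_out: mean discriminator output over the current minibatch."""
        self.d_ema = 0.1 * d_out + 0.9 * self.d_ema
        return 0.2 * max(0.0, self.d_ema - 0.5) ** 2

    def apply(self, x, rng=np.random):
        """Multiplicative Gaussian noise on a conv-layer input (indicative form)."""
        mag = 0.2 * max(0.0, self.d_ema - 0.5) ** 2
        return x * (1.0 + mag * rng.standard_normal(x.shape))
```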
1710.10196#58
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
59
# C CELEBA-HQ DATASET

In this section we describe the process we used to create the high-quality version of the CELEBA dataset, consisting of 30000 images at 1024 × 1024 resolution. As a starting point, we took the collection of in-the-wild images included as part of the original CELEBA dataset. These images are extremely varied in terms of resolution and visual quality, ranging all the way from 43 × 55 to 6732 × 8984 pixels. Some of them show crowds of several people whereas others focus on the face of a single person, often only a part of the face. Thus, we found it necessary to apply several image processing steps to ensure consistent quality and to center the images on the facial region.
1710.10196#59
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
60
Our processing pipeline is illustrated in Figure 8. To improve the overall image quality, we preprocess each JPEG image using two pre-trained neural networks: a convolutional autoencoder trained to remove JPEG artifacts in natural images, similar in structure to the one proposed by Mao et al. (2016a), and an adversarially-trained 4x super-resolution network (Korobchenko & Foco, 2017) similar to Ledig et al. (2016). To handle cases where the facial region extends outside the image, we employ padding and filtering to extend the dimensions of the image as illustrated in Figure 8(c–d). We then select an oriented crop rectangle based on the facial landmark annotations included in the
1710.10196#60
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
61
Figure 8: Creating the CELEBA-HQ dataset. We start with a JPEG image (a) from the CelebA in-the-wild dataset. We improve the visual quality (b, top) through JPEG artifact removal (b, middle) and 4x super-resolution (b, bottom). We then extend the image through mirror padding (c) and Gaussian filtering (d) to produce a visually pleasing depth-of-field effect. Finally, we use the facial landmark locations to select an appropriate crop region (e) and perform high-quality resampling to obtain the final image at 1024 × 1024 resolution (f).

original CELEBA dataset as follows:

x′ = e₁ − e₀
y′ = ½(e₀ + e₁) − ½(m₀ + m₁)
c = ½(e₀ + e₁) − 0.1 · y′
s = max(4.0 · |x′|, 3.6 · |y′|)
x = Normalize(x′ − Rotate90(y′))
y = Rotate90(x)
1710.10196#61
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
62
e₀, e₁, m₀, and m₁ represent the 2D pixel locations of the two eye landmarks and the two mouth landmarks, respectively; c and s indicate the center and size of the desired crop rectangle, and x and y indicate its orientation. We constructed the above formulas empirically to ensure that the crop rectangle stays consistent in cases where the face is viewed from different angles. Once we have calculated the crop rectangle, we transform it to 4096 × 4096 pixels using bilinear filtering, and then scale it to 1024 × 1024 resolution using a box filter.

We perform the above processing for all 202599 images in the dataset, analyze the resulting 1024 × 1024 images further to estimate the final image quality, sort the images accordingly, and discard all but the best 30000 images. We use a frequency-based quality metric that favors images whose power spectrum contains a broad range of frequencies and is approximately radially symmetric. This penalizes blurry images as well as images that have conspicuous directional features due to, e.g., visible halftoning patterns. We selected the cutoff point of 30000 images as a practical sweet spot between variation and image quality, because it appeared to yield the best results.
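The crop-rectangle formulas above translate directly into code; below is a sketch with made-up landmark coordinates (e₀ and e₁ are the eyes, m₀ and m₁ the mouth corners):

```python
import numpy as np

def rotate90(v):
    """Rotate a 2D vector by 90 degrees."""
    return np.array([-v[1], v[0]])

def crop_rectangle(e0, e1, m0, m1):
    xp = e1 - e0                                  # x'
    yp = 0.5 * (e0 + e1) - 0.5 * (m0 + m1)        # y'
    c = 0.5 * (e0 + e1) - 0.1 * yp                # center
    s = max(4.0 * np.linalg.norm(xp),
            3.6 * np.linalg.norm(yp))             # size
    x = xp - rotate90(yp)
    x = x / np.linalg.norm(x)                     # normalized x axis
    y = rotate90(x)                               # y axis
    return c, s, x, y

c, s, x, y = crop_rectangle(np.array([100.0, 120.0]), np.array([160.0, 118.0]),
                            np.array([115.0, 190.0]), np.array([150.0, 189.0]))
```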
1710.10196#62
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
63
# D CIFAR10 RESULTS

Figure 9 shows non-curated images generated in the unsupervised setting, and Table 3 compares against prior art in terms of inception scores. We report our scores in two different ways: 1) the highest score observed during training runs (here ± refers to the standard deviation returned by the inception score calculator) and 2) the mean and standard deviation computed from the highest scores seen during training, starting from ten random initializations. Arguably the latter methodology is much more meaningful, as one can be lucky with individual runs (as we were). We did not use any kind of augmentation with this dataset.

# E MNIST-1K DISCRETE MODE TEST WITH CRIPPLED DISCRIMINATOR

Metz et al. (2016) describe a setup where a generator synthesizes MNIST digits simultaneously to 3 color channels, the digits are classified using a pre-trained classifier (0.4% error rate in our case), and concatenated to form a number in [0, 999]. They generate a total of 25,600 images and count
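The two reporting conventions from Section D amount to the following (the per-run scores here are placeholders, not results from the paper):

```python
import numpy as np

best_per_run = np.array([8.80, 8.52, 8.61, 8.47, 8.55,
                         8.49, 8.58, 8.60, 8.51, 8.44])  # hypothetical values

print("best single run:", best_per_run.max())                 # convention 1
print("mean over runs: %.2f +/- %.2f"
      % (best_per_run.mean(), best_per_run.std()))            # convention 2
```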
1710.10196#63
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
65
UNSUPERVISED
Method | Inception score
ALI (Dumoulin et al., 2016) | 5.34 ± 0.05
GMAN (Durugkar et al., 2016) | 6.00 ± 0.19
Improved GAN (Salimans et al., 2016) | 6.86 ± 0.06
CEGAN-Ent-VI (Dai et al., 2017) | 7.07 ± 0.07
LR-AGN (Yang et al., 2017) | 7.17 ± 0.17
DFM (Warde-Farley & Bengio, 2017) | 7.72 ± 0.13
WGAN-GP (Gulrajani et al., 2017) | 7.86 ± 0.07
Splitting GAN (Grinblat et al., 2017) | 7.90 ± 0.09
Our (best run) | 8.80 ± 0.05
Our (computed from 10 runs) | 8.56 ± 0.06

LABEL CONDITIONED
Method | Inception score
DCGAN (Radford et al., 2015) | –
Improved GAN (Salimans et al., 2016) | 8.09 ± 0.07
AC-GAN (Odena et al., 2017) | 8.25 ± 0.07
SGAN (Huang et al., 2016) | 8.59 ± 0.12
WGAN-GP (Gulrajani et al., 2017) | 8.67 ± 0.14
Splitting GAN (Grinblat et al., 2017) | 8.87 ± 0.09
1710.10196#65
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
67
Table 3: CIFAR10 inception scores, higher is better.

how many of the discrete modes are covered. They also compute the KL divergence as KL(histogram || uniform). Modern GAN implementations can trivially cover all modes at very low divergence (0.05 in our case), and thus Metz et al. specify a fairly low-capacity generator and two severely crippled discriminators (“K/2” has ∼2000 params and “K/4” only about ∼500) to tease out differences between training methodologies. Both of these networks use batch normalization.

As shown in Table 4, using the WGAN-GP loss with the networks specified by Metz et al. covers many more modes than the original GAN loss, and even more than the unrolled original GAN with the smaller (K/4) discriminator. The KL divergence, which is arguably a more accurate metric than the raw count, behaves even more favorably.

Replacing batch normalization with our normalization (equalized learning rate, pixelwise normalization) improves the result considerably, while also removing a few trainable parameters from the discriminators. The addition of a minibatch stddev layer further improves the scores, while restoring the discriminator capacity to within 0.5% of the original. Progression does not help much with these tiny images, but it does not hurt either.
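A sketch of the metric itself, assuming the pre-trained classifier outputs one value in [0, 999] per generated image (the function name is mine):

```python
import numpy as np

def mode_stats(numbers, n_modes=1000):
    """Covered modes and KL(histogram || uniform) over the 1000 possible values."""
    hist = np.bincount(numbers, minlength=n_modes).astype(np.float64)
    p = hist / hist.sum()
    covered = int((hist > 0).sum())
    q = 1.0 / n_modes              # uniform reference distribution
    nz = p > 0                     # zero-probability bins contribute 0 to the sum
    kl = float(np.sum(p[nz] * np.log(p[nz] / q)))
    return covered, kl

numbers = np.random.randint(0, 1000, size=25_600)  # stand-in for classifier output
print(mode_stats(numbers))
```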
1710.10196#67
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
68
Arch | K/4 # | K/4 KL | K/2 # | K/2 KL
GAN | 30.6 ± 20.7 | 5.99 ± 0.04 | 628.0 ± 140.9 | 2.58 ± 0.75
+ unrolling | 372.2 ± 20.7 | 4.66 ± 0.46 | 817.4 ± 39.9 | 1.43 ± 0.12
WGAN-GP | 640.1 ± 136.3 | 1.97 ± 0.70 | 772.4 ± 146.5 | 1.35 ± 0.55
+ our norm | 856.7 ± 50.4 | 1.10 ± 0.19 | 886.6 ± 58.5 | 0.98 ± 0.33
+ mb stddev | 881.3 ± 39.2 | 1.09 ± 0.16 | 918.3 ± 30.2 | 0.89 ± 0.21
+ progression | 859.5 ± 36.2 | 1.05 ± 0.09 | 919.8 ± 35.1 | 0.82 ± 0.13
1710.10196#68
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
69
Table 4: Results for the MNIST discrete mode test using two tiny discriminators (K/4, K/2) defined by Metz et al. (2016). The number of covered modes (#) and the KL divergence from a uniform distribution are given as an average ± standard deviation over 8 random initializations. Higher is better for the number of modes, and lower is better for KL divergence.

# F ADDITIONAL CELEBA-HQ RESULTS

Figure 10 shows the nearest neighbors found for our generated images. Figure 11 gives additional generated examples from CELEBA-HQ. We enabled mirror augmentation for all tests using CELEBA and CELEBA-HQ. In addition to the sliced Wasserstein distance (SWD), we also quote the recently introduced Fréchet Inception Distance (FID) (Heusel et al., 2017) computed from 50K images.

# G LSUN RESULTS

Figures 12–17 show representative images generated for all 30 LSUN categories. A separate network was trained for each category using identical parameters. All categories were trained using 100k images, except for BEDROOM and DOG that used all the available data. Since 100k images is a very limited amount of training data for most categories, we enabled mirror augmentation in these tests (but not for BEDROOM or DOG).

# H ADDITIONAL IMAGES FOR TABLE 1
1710.10196#69
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
71
Figure 10: Top: Our CELEBA-HQ results. Next five rows: Nearest neighbors found from the training data, based on feature-space distance. We used activations from five VGG layers, as suggested by Chen & Koltun (2017). Only the crop highlighted in the bottom right image was used for comparison, in order to exclude image background and focus the search on matching facial features.

Figure 11: Additional 1024 × 1024 images generated using the CELEBA-HQ dataset. Sliced Wasserstein Distance (SWD) ×10³ for levels 1024, ..., 16: 7.48, 7.24, 6.08, 3.51, 3.55, 3.02, 7.22, for which the average is 5.44. Fréchet Inception Distance (FID) computed from 50K images was 7.30. See the video for latent space interpolations.
1710.10196#71
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
72
[Rotated row labels from the LSUN result figures: AIRPLANE, BEDROOM, BICYCLE, BIRD, BOAT — each label quotes the row's SWD ×10^3 and FID values.]
1710.10196#72
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
74
[Rotated row labels from the LSUN result figures: BOTTLE, BRIDGE, BUS, CAR, CAT — each label quotes the row's SWD ×10^3 and FID values.]
1710.10196#74
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
76
[Rotated row labels from the LSUN result figures: CHAIR, CHURCH OUTDOOR, CLASSROOM, CONFERENCE ROOM, COW — each label quotes the row's SWD ×10^3 and FID values.]
1710.10196#76
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
78
[Rotated row labels from the LSUN result figures: DINING ROOM, DINING TABLE, DOG, HORSE, KITCHEN — each label quotes the row's SWD ×10^3 and FID values.]
1710.10196#78
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
80
[Rotated row labels from the LSUN result figures: LIVING ROOM, MOTORBIKE, PERSON, POTTED PLANT, RESTAURANT — each label quotes the row's SWD ×10^3 and FID values.]
1710.10196#80
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
82
[Rotated row labels from the LSUN result figures: SHEEP, SOFA, TOWER, TRAIN, TV MONITOR — each label quotes the row's SWD ×10^3 and FID values.]
1710.10196#82
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.10196
83
Figure 17: Example images generated at 256 × 256 from LSUN categories. Sliced Wasserstein Distance (SWD) ×10^3 is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. [Rotated row labels for the Table 1 configurations: (a) Gulrajani et al. (2017), (b) Progressive growing, (c) Small minibatch, (d) Revised training parameters, (e∗) Minibatch discrimination, (e) Minibatch stddev, (f) Equalized learning rate, (g) Pixelwise normalization.]
1710.10196#83
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
http://arxiv.org/pdf/1710.10196
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
cs.NE, cs.LG, stat.ML
Final ICLR 2018 version
null
cs.NE
20171027
20180226
[]
1710.06481
1
# Sebastian Riedel # Abstract Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently no resources exist to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence – effectively performing multi-hop, alias multi-step, inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced,1 and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information; and providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 54.5% on an annotated test set, compared to human performance at 85.0%, leaving ample room for improvement.
1710.06481#1
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
2
The Hanging Gardens, in [Mumbai], also known as Pherozeshah Mehta Gardens, are terraced gardens ... They provide sunset views over the [Arabian Sea] ... Mumbai (also known as Bombay, the official name until 1995) is the capital city of the Indian state of Maharashtra. It is the most populous city in India ... The Arabian Sea is a region of the northern Indian Ocean bounded on the north by Pakistan and Iran, on the west by northeastern Somalia and the Arabian Peninsula, and on the east by India ... Q: (Hanging gardens of Mumbai, country, ?) Options: {Iran, India, Pakistan, Somalia, ...} Figure 1: A sample from the WIKIHOP dataset where it is necessary to combine information spread across multiple documents to infer the correct answer.
1710.06481#2
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
3
Figure 1: A sample from the WIKIHOP dataset where it is necessary to combine information spread across multiple documents to infer the correct answer. # Introduction Devising computer systems capable of answering questions about knowledge described using text has been a longstanding challenge in Natural Language Processing (NLP). Contemporary end-to-end Reading Comprehension (RC) methods can learn to extract the correct answer span within a given text and approach human-level performance (Kadlec et al., 2016; Seo et al., 2017a). However, for existing datasets, relevant information is often concentrated locally within a single sentence, emphasizing the role of locating, matching, and aligning information between query and support text. For example, Weissenborn et al. (2017) observed that a simple binary word-in-query indicator feature boosted the relative accuracy of a baseline model by 27.9%. 1Available at http://qangaroo.cs.ucl.ac.uk
1710.06481#3
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
4
# Introduction Devising computer systems capable of answering questions about knowledge described using text has 1Available at http://qangaroo.cs.ucl.ac.uk We argue that, in order to further the ability of machine comprehension methods to extract knowledge from text, we must move beyond a scenario where relevant information is coherently and explicitly stated within a single document. Methods with this capability would aid Information Extraction (IE) applications, such as discovering drug-drug interactions (Gurulingappa et al., 2012) by connecting protein interactions reported across different publications. They would also benefit search (Carpineto and Romano, 2012) and Question Answering (QA) applications (Lin and Pantel, 2001) where the required information cannot be found in a single location.
1710.06481#4
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
5
Figure 1 shows an example from WIKIPEDIA, where the goal is to identify the country property of the Hanging Gardens of Mumbai. This cannot be inferred solely from the article about them without additional background knowledge, as the answer is not stated explicitly. However, several of the linked articles mention the correct answer India (and other countries), but cover different topics (e.g. Mumbai, Arabian Sea, etc.). Finding the answer requires multi-hop reasoning: figuring out that the Hanging Gardens are located in Mumbai, and then, from a second document, that Mumbai is a city in India.
1710.06481#5
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
6
We define a novel RC task in which a model should learn to answer queries by combining evidence stated across documents. We introduce a methodology to induce datasets for this task and derive two datasets. The first, WIKIHOP, uses sets of WIKIPEDIA articles where answers to queries about specific properties of an entity cannot be located in the entity's article. In the second dataset, MEDHOP, the goal is to establish drug-drug interactions based on scientific findings about drugs and proteins and their interactions, found across multiple MEDLINE abstracts. For both datasets we draw upon existing Knowledge Bases (KBs), WIKIDATA and DRUGBANK, as ground truth, utilizing distant supervision (Mintz et al., 2009) to induce the data – similar to Hewlett et al. (2016) and Joshi et al. (2017).
1710.06481#6
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
7
We establish that for 74.1% and 68.0% of the samples, the answer can be inferred from the given documents by a human annotator. Still, constructing multi-document datasets is challenging; we encounter and prescribe remedies for several pitfalls associated with their assembly – for example, spurious co-locations of answers and specific documents. For both datasets we then establish several strong baselines and evaluate the performance of two previously proposed competitive RC models (Seo et al., 2017a; Weissenborn et al., 2017). We find that one can integrate information across documents, but neither excels at selecting relevant information from a larger document set, as their accuracy increases significantly when given only documents guaranteed to be relevant. The best model reaches 54.5% on an annotated test set, compared to human performance at 85.0%, indicating ample room for improvement. In summary, our key contributions are as follows: Firstly, proposing a cross-document multi-step RC task, as well as a general dataset induction strategy. Secondly, assembling two datasets from different domains and identifying dataset construction pitfalls and remedies. Thirdly, establishing multiple baselines, including two recently proposed RC models, as well as analysing model behaviour in detail through ablation studies. # 2 Task and Dataset Construction Method
1710.06481#7
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
8
# 2 Task and Dataset Construction Method We will now formally define the multi-hop RC task, and a generic methodology to construct multi-hop RC datasets. Later, in Sections 3 and 4 we will demonstrate how this method is applied in practice by creating datasets for two different domains. Task Formalization A model is given a query q, a set of supporting documents Sq, and a set of candidate answers Cq – all of which are mentioned in Sq. The goal is to identify the correct answer a∗ ∈ Cq by drawing on the support documents Sq. Queries could potentially have several true answers when not constrained to rely on a specific set of support documents – e.g., queries about the parent of a certain individual. However, in our setup each sample has only one true answer among Cq and Sq. Note that even though we will utilize background information during dataset assembly, such information will not be available to a model: the document set will be provided in random order and without any metadata. While certainly beneficial, this would distract from our goal of fostering end-to-end RC methods that infer facts by combining separate facts stated in text.
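To make the formalization concrete, here is a minimal sketch (assumed, not from the paper's release) of how one such sample could be represented; the field and class names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MultiHopSample:
    """One sample of the multi-hop RC task: query q, supports Sq, candidates Cq."""
    query: tuple[str, str]   # (subject entity s, relation r); object slot left empty
    supports: list[str]      # support documents Sq, given in random order, no metadata
    candidates: list[str]    # candidate answers Cq, all mentioned somewhere in Sq
    answer: str              # the single correct answer a* in Cq (train/dev only)

sample = MultiHopSample(
    query=("Hanging Gardens of Mumbai", "country"),
    supports=["The Hanging Gardens, in Mumbai, ...", "Mumbai ... is the capital city ..."],
    candidates=["India", "Iran", "Pakistan", "Somalia"],
    answer="India",
)
```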
1710.06481#8
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
9
Dataset Assembly We assume that there exists a document corpus D, together with a KB containing fact triples (s, r, o) – with subject entity s, relation r, and object entity o. For example, one such fact could be (Hanging Gardens of Mumbai, country, India). We start with individual KB facts and transform them into query-answer pairs by leaving the object slot empty, i.e. q = (s, r, ?) and a∗ = o. Next, we define a directed bipartite graph, where vertices on one side correspond to documents in D, and vertices on the other side are entities from the KB – see Figure 2 for an example. A document node d is connected to an entity e if e is mentioned in d, though there may be further constraints when defining the graph connectivity. For a given (q, a∗) pair, the candidates Cq and support documents Sq ⊆ D are identified by traversing the bipartite graph using breadth-first search; the documents visited will become the support documents Sq.
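A minimal sketch of this graph construction and breadth-first traversal, under the simplifying assumption that mentions are found by exact string matching (the papers' actual mention detection is domain-specific):

```python
from collections import deque

def build_bipartite_graph(docs, entities):
    """Connect each document to the entities it mentions (exact-match assumption).
    `docs` maps document ids to text; `entities` is a set of entity strings."""
    doc_to_entities = {d: {e for e in entities if e in docs[d]} for d in docs}
    entity_to_docs = {e: {d for d in docs if e in docs[d]} for e in entities}
    return doc_to_entities, entity_to_docs

def traverse(start_entity, end_points, doc_to_entities, entity_to_docs, max_docs=3):
    """BFS from the query subject s: visited documents become Sq, and visited
    end points become the candidate set Cq."""
    supports, candidates = set(), set()
    frontier = deque([(start_entity, 0)])  # (entity, number of documents used)
    seen = {start_entity}
    while frontier:
        entity, depth = frontier.popleft()
        if entity in end_points:
            candidates.add(entity)
        if depth == max_docs:          # chain-length limit reached; stop expanding
            continue
        for doc in entity_to_docs.get(entity, ()):
            supports.add(doc)
            for nxt in doc_to_entities[doc] - seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return supports, candidates
```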
1710.06481#9
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
10
As the traversal starting point, we use the node belonging to the subject entity s of the query q. As traversal end points, we use the set of all entity nodes that are type-consistent answers to q.2 Note that whenever there is another fact (s, r, o′) in the KB, i.e. a fact producing the same q but with a different a∗, we will not include o′ into the set of end points for this sample. This ensures that precisely one of the end points corresponds to a correct answer to q.3 When traversing the graph starting at s, several end points will be visited, though generally not all; those visited define the candidate set Cq. If however the correct answer a∗ is not among them we discard the (q, a∗) pair. The documents visited to reach the end points will define the support document set Sq. That is, Sq comprises chains of documents leading not only from the query subject to the correct answer, but also to type-consistent false candidates.
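A sketch of how the end-point set could be assembled from the KB, following the footnoted definition of type consistency (the helper name is hypothetical):

```python
def end_points_for_query(kb, subject, relation, answer):
    """End points for q = (subject, relation, ?): all type-consistent entities,
    excluding alternative true objects o' of the same (subject, relation)."""
    type_consistent = {o for (s, r, o) in kb if r == relation}
    other_true = {o for (s, r, o) in kb
                  if s == subject and r == relation and o != answer}
    end_points = type_consistent - other_true
    assert answer in end_points  # exactly one end point is a correct answer to q
    return end_points
```

Samples whose traversal never reaches `answer` would then be discarded, as described above.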
1710.06481#10
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
11
With this methodology, relevant textual evidence for (q, a∗) will be spread across documents along the chain connecting s and a∗ – ensuring that multi-hop reasoning goes beyond resolving co-reference within a single document. Note that including other type-consistent candidates alongside a∗ as end points in the graph traversal – and thus into the support documents – renders the task considerably more challenging (Jia and Liang, 2017). Models could otherwise identify a∗ in the documents by simply relying on type-consistency heuristics. It is worth pointing out that by introducing alternative candidates we counterbalance a type-consistency bias, in contrast to Hermann et al. (2015) and Hill et al. (2016) who instead rely on entity masking. 2To determine entities which are type-consistent for a query q, we consider all entities which are observed as object in a fact with r as relation type – including the correct answer. 3Here we rely on a closed-world assumption; that is, we assume that the facts in the KB state all true facts.
1710.06481#11
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
12
Figure 2: A bipartite graph connecting entities and documents mentioning them. Bold edges are those traversed for the first fact in the small KB on the right; yellow highlighting indicates documents in Sq and candidates in Cq. Check and cross indicate correct and false candidates. # 3 WIKIHOP WIKIPEDIA contains an abundance of human-curated, multi-domain information and has several structured resources such as infoboxes and WIKIDATA (Vrandečić, 2012) associated with it. WIKIPEDIA has thus been used for a wealth of research to build datasets posing queries about a single sentence (Morales et al., 2016; Levy et al., 2017) or article (Yang et al., 2015; Hewlett et al., 2016; Rajpurkar et al., 2016). However, no attempt has been made to construct a cross-document multi-step RC dataset based on WIKIPEDIA.
1710.06481#12
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
13
A recently proposed RC dataset is WIKIREADING (Hewlett et al., 2016), where WIKIDATA tuples (item, property, answer) are aligned with the WIKIPEDIA articles regarding their item. The tuples define a slot filling task with the goal of predicting the answer, given an article and property. One problem with using WIKIREADING as an extractive RC dataset is that 54.4% of the samples do not state the answer explicitly in the given article (Hewlett et al., 2016). However, we observed that some of the articles accessible by following hyperlinks from the given article often state the answer, alongside other plausible candidates. # 3.1 Assembly We now apply the methodology from Section 2 to create a multi-hop dataset with WIKIPEDIA as the document corpus and WIKIDATA as structured knowledge triples. In this setup, (item, property, answer) WIKIDATA tuples correspond to (s, r, o) triples, and the item and property of each sample
1710.06481#13
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
14
together form our query q – e.g., (Hanging Gardens of Mumbai, country, ?). Similar to Yang et al. (2015) we only use the first paragraph of an article, as relevant information is more often stated in the beginning. Starting with all samples in WIKIREADING, we first remove samples where the answer is stated explicitly in the WIKIPEDIA article about the item.4 The bipartite graph is structured as follows: (1) for edges from articles to entities: all articles mentioning an entity e are connected to e; (2) for edges from entities to articles: each entity e is only connected to the WIKIPEDIA article about the entity. Traversing the graph is then equivalent to iteratively following hyperlinks to new articles about the anchor text entities.
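Relative to the generic traversal sketch given earlier, this WIKIHOP restriction only changes the entity-to-document edges; a hedged one-function sketch, where `article_about` is a hypothetical mapping from each entity to its own WIKIPEDIA article:

```python
def wikihop_entity_to_docs(entities, article_about):
    """WIKIHOP edge rule (2): an entity connects only to the article about it,
    so traversal amounts to following hyperlinks to anchor-text entities."""
    return {e: {article_about[e]} for e in entities if e in article_about}
```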
1710.06481#14
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
15
For a given query-answer pair, the item entity is chosen as the starting point for the graph traversal. A traversal will always pass through the article about the item, since this is the only document connected from there. The end point set includes the correct answer alongside other type-consistent candidate expressions, which are determined by considering all facts belonging to WIKIREADING training samples, selecting those triples with the same property as in q and keeping their answer expressions. As an example, for the WIKIDATA property country, this would be the set {France, Russia, ...}. We executed graph traversal up to a maximum chain length of 3 documents. To avoid imposing unreasonable computational constraints, samples with more than 64 different support documents or 100 candidates are removed, discarding ≈1% of the samples.
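The chain-length and size limits above could be enforced as a simple post-traversal filter; a sketch with the thresholds quoted in the text (the helper name is hypothetical):

```python
MAX_CHAIN_LENGTH = 3   # documents per traversal chain (passed to the BFS sketch)
MAX_SUPPORTS = 64      # discard samples with more support documents
MAX_CANDIDATES = 100   # discard samples with more candidates

def keep_sample(supports, candidates, answer):
    """Keep a (q, a*) sample only if the answer was reached and sizes are bounded."""
    return (answer in candidates
            and len(supports) <= MAX_SUPPORTS
            and len(candidates) <= MAX_CANDIDATES)
```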
1710.06481#15
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
16
Candidate Frequency Imbalance A first observation is that there is a significant bias in the answer distribution of WIKIREADING. For example, in the majority of the samples the property country has the United States of America as the answer. A simple majority class baseline would thus prove successful, but would tell us little about multi-hop reasoning. To combat this issue, we subsampled the dataset to ensure that samples of any one particular answer candidate make up no more than 0.1% of the dataset, and omitted articles about the United States. 4We thus use a disjoint subset of WIKIREADING compared to Levy et al. (2017) to construct WIKIHOP. Document-Answer Correlations A problem unique to our multi-document setting is the possibility of spurious correlations between candidates and documents induced by the graph traversal method. In fact, if we were not to address this issue, a model designed to exploit these regularities could achieve 74.6% accuracy (detailed in Section 6).
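The paper does not spell out its exact subsampling procedure, so the following is one plausible implementation of the 0.1% frequency cap, offered as a sketch only:

```python
import random
from collections import Counter

def cap_answer_frequency(samples, max_frac=0.001, seed=0):
    """Subsample so that no single answer accounts for more than ~max_frac of
    the dataset (cap computed on the original size, hence approximate)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cap = max(1, int(len(shuffled) * max_frac))
    counts, kept = Counter(), []
    for s in shuffled:
        if counts[s.answer] < cap:
            counts[s.answer] += 1
            kept.append(s)
    return kept
```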
1710.06481#16
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
17
Concretely, we observed that certain documents frequently co-occur with the correct answer, independently of the query. For example, if the article about London is present in Sq, the answer is likely to be the United Kingdom, independent of the query type or entity in question. Appendix C contains a list with several additional examples. We designed a statistic to measure this effect and then used it to sub-sample the dataset. The statistic counts how often a candidate c is observed as the correct answer when a certain document is present in Sq across training set samples. More formally, for a given document d and answer candidate c, let cooccurrence(d, c) denote the total count of how often d co-occurs with c in a sample where c is also the correct answer. We use this statistic to filter the dataset, by discarding samples with at least one document-candidate pair (d, c) for which cooccurrence(d, c) > 20.

# 4 MEDHOP
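The statistic and the filter translate directly into code. The record layout below (each sample carrying its support-document ids, candidate list, and correct answer) is assumed, and extending the pair check to all candidates of a sample is one reading of the filtering rule:

```python
from collections import Counter

def cooccurrence_counts(train_samples):
    """cooccurrence(d, c): how often document d appears in a training
    sample whose correct answer is c."""
    counts = Counter()
    for s in train_samples:
        for d in s["docs"]:
            counts[(d, s["answer"])] += 1
    return counts

def filter_by_cooccurrence(samples, counts, threshold=20):
    """Discard samples containing at least one document-candidate pair
    (d, c) with cooccurrence(d, c) above the threshold from the paper."""
    return [
        s for s in samples
        if all(
            counts[(d, c)] <= threshold
            for d in s["docs"]
            for c in s["candidates"]
        )
    ]
```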
1710.06481#17
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
18
# 4 MEDHOP

Following the same general methodology, we next construct a second dataset for the domain of molecular biology – a field that has been undergoing exponential growth in the number of publications (Cohen and Hunter, 2004). The promise of applying NLP methods to cope with this increase has led to research efforts in IE (Hirschman et al., 2005; Kim et al., 2011) and QA for biomedical text (Hersh et al., 2007; Nentidis et al., 2017). There are a plethora of manually curated structured resources (Ashburner et al., 2000; The UniProt Consortium, 2017) which can either serve as ground truth or be used to induce training data using distant supervision (Craven and Kumlien, 1999; Bobic et al., 2012). Existing RC datasets are either severely limited in size (Hersh et al., 2007) or cover a very diverse set of query types (Nentidis et al., 2017), complicating the application of neural models that have seen successes for other domains (Wiese et al., 2017).
1710.06481#18
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
19
A task that has received significant attention is detecting Drug-Drug Interactions (DDIs). Existing DDI efforts have focused on explicit mentions of interactions in single sentences (Gurulingappa et al., 2012; Percha et al., 2012; Segura-Bedmar et al., 2013). However, as shown by Peng et al. (2017), cross-sentence relation extraction increases the number of available relations. It is thus likely that cross-document interactions would further improve recall, which is of particular importance considering interactions that are never stated explicitly – but rather need to be inferred from separate pieces of evidence. The promise of multi-hop methods is finding and combining individual observations that can suggest previously unobserved DDIs, aiding the process of making scientific discoveries, yet not directly from experiments, but by inferring them from established public knowledge (Swanson, 1986).
1710.06481#19
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
20
DDIs are caused by Protein-Protein Interaction (PPI) chains, forming biomedical pathways. If we consider PPI chains across documents, we find examples like in Figure 3. Here the first document states that the drug Leuprolide causes GnRH receptor-induced synaptic potentiations, which can be blocked by the protein Progonadoliberin-1. The last document states that another drug, Triptorelin, is a superagonist of the same protein. It is therefore likely to affect the potency of Leuprolide, describing a way in which the two drugs interact. Besides the true interaction there is also a false candidate Urofollitropin for which, although mentioned together with GnRH receptor within one document, there is no textual evidence indicating interactions with Leuprolide.

# 4.1 Assembly

We construct MEDHOP using DRUGBANK (Law et al., 2014) as a structured knowledge resource and research paper abstracts from MEDLINE as documents. There is only one relation type for DRUGBANK facts, interacts with, that connects pairs of drugs – an example of a MEDHOP query would thus
1710.06481#20
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
21
[Figure 3: A sample from the MEDHOP dataset. One support document states that Leuprolide elicited a long-lasting potentiation of excitatory postsynaptic currents, and that the GnRH receptor-induced synaptic potentiation was blocked by Progonadoliberin-1, a specific GnRH receptor antagonist. Another studies the distribution and co-localization of Urofollitropin, its receptor, and the GnRH receptor. A third reports that analyses of gene expression demonstrated a dynamic response to the Progonadoliberin-1 superagonist Triptorelin. Q: (Leuprolide, interacts_with, ?) Options: {Triptorelin, Urofollitropin}]

be (Leuprolide, interacts with, ?). We start by processing the 2016 MEDLINE release using the preprocessing pipeline employed for the BioNLP 2011 Shared Task (Stenetorp et al., 2011). We restrict the set of entities in the bipartite graph to drugs in DRUGBANK and human proteins in SWISSPROT (Bairoch et al., 2004). That is, the graph has drugs and proteins on one side, and MEDLINE abstracts on the other.
1710.06481#21
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
22
The edge structure is as follows: (1) There is an edge from a document to all proteins mentioned in it. (2) There is an edge between a document and a drug, if this document also mentions a protein known to be a target for the drug according to DRUGBANK. This edge is bidirectional, i.e. it can be traversed both ways, since there is no canonical document describing each drug; thus one can "hop" to any document mentioning the drug and its target. (3) There is an edge from a protein p to a document mentioning p, but only if the document also mentions another protein p' which is known to interact with p according to REACTOME (Fabregat et al., 2016). Given our distant supervision assumption, these additionally constraining requirements err on the side of precision.
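The three edge rules can be sketched over assumed in-memory inputs; all names below (doc_proteins, drug_targets, ppi_pairs) are illustrative stand-ins for the DRUGBANK, SWISSPROT, and REACTOME lookups, not the actual pipeline:

```python
def build_medhop_edges(doc_proteins, drug_targets, ppi_pairs):
    """Sketch of the bipartite-graph edge rules.

    doc_proteins: dict doc_id -> set of proteins mentioned in the doc
    drug_targets: dict drug -> set of its target proteins (DRUGBANK)
    ppi_pairs:    set of frozensets {p, p'} of interacting proteins
                  (REACTOME)
    """
    edges = set()
    for doc, proteins in doc_proteins.items():
        for p in proteins:
            edges.add((doc, p))  # (1) document -> every mentioned protein
            # (3) protein -> document, only if a known interaction
            #     partner of p is mentioned in the same document
            if any(frozenset((p, q)) in ppi_pairs for q in proteins if q != p):
                edges.add((p, doc))
        # (2) bidirectional drug <-> document edge whenever the document
        #     mentions a known target of the drug
        for drug, targets in drug_targets.items():
            if targets & proteins:
                edges.add((drug, doc))
                edges.add((doc, drug))
    return edges
```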
1710.06481#22
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
23
As a mention, similar to Percha et al. (2012), we consider any exact match of a name variant of a drug or human protein in DRUGBANK or SWISSPROT. For a given DDI (drug1, interacts with, drug2), we then select drug1 as the starting point for the graph traversal. As possible end points, we consider any other drug, apart from drug1 and those interacting with drug1 other than drug2. Similar to WIKIHOP, we exclude samples with more than 64 support documents and impose a maximum document length of 300 tokens plus title. Document Sub-sampling The bipartite graph for MEDHOP is orders of magnitude more densely connected than for WIKIHOP. This can lead to potentially large support document sets Sq, to a degree where it becomes computationally infeasible for a majority of existing RC models. After the traversal has finished, we subsample documents by first adding a set of documents that connects the drug in the query with its answer. We then iteratively add documents to connect alternative candidates until we reach the limit of 64 documents – while ensuring that all candidates have the same number of paths through the bipartite graph.
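One greedy reading of this subsampling procedure, under a hypothetical layout where each candidate maps to its list of document-id paths; the roll-back keeps per-candidate path counts equal when the budget runs out mid-round (a sketch, not the authors' code):

```python
def subsample_support_docs(paths_per_candidate, answer, limit=64):
    """Seed with one path connecting the query drug to the correct
    `answer`, then add one path per candidate per round; if a round
    pushes the document set past the budget, roll the whole round back
    so all candidates keep an equal number of paths."""
    docs = set(paths_per_candidate[answer][0])
    taken = {c: (1 if c == answer else 0) for c in paths_per_candidate}
    for round_idx in range(max(len(p) for p in paths_per_candidate.values())):
        before = set(docs)  # snapshot, in case this round overshoots
        for cand in sorted(paths_per_candidate):
            paths = paths_per_candidate[cand]
            if taken[cand] > round_idx or taken[cand] >= len(paths):
                continue  # already served this round / no paths left
            docs |= set(paths[taken[cand]])
            taken[cand] += 1
        if len(docs) > limit:
            return before  # drop the incomplete round
    return docs
```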
1710.06481#23
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
24
Mitigating Candidate Frequency Imbalance Some drugs interact with more drugs than others – Aspirin, for example, interacts with 743 other drugs, but Isotretinoin with only 34. This leads to similar candidate frequency imbalance issues as with WIKIHOP – but due to its smaller size, MEDHOP is difficult to sub-sample. Nevertheless, we can successfully combat this issue by masking entity names, as detailed in Section 6.2.

# 5 Dataset Analysis
1710.06481#24
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
25
# 5 Dataset Analysis

Table 1 shows the dataset sizes. Note that WIKIHOP inherits the train, development, and test set splits from WIKIREADING – i.e., the full dataset creation, filtering, and sub-sampling pipeline is executed on each set individually. Also note that sub-sampling according to document-answer correlation significantly reduces the size of WIKIHOP from ≈528K training samples to ≈44K. While in terms of samples, both WIKIHOP and MEDHOP are smaller than other large-scale RC datasets, such as SQuAD and WIKIREADING, the supervised learning signal available per sample is arguably greater. One could, for example, re-frame the task as binary path classification: given two entities and a document path connecting them, determine whether a given relation holds. For such a case, WIKIHOP and MEDHOP would have more than 1M and 150K paths to be classified, respectively. Instead, in our formulation, this corresponds to each single sample containing the supervised learning signal from an average of 19.5 and 59.8 unique document paths.
1710.06481#25
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
26
Table 2 shows statistics on the number of candidates and documents per sample on the respective training sets. For MEDHOP, the majority of samples have 9 candidates, due to the way documents are selected up until a maximum of 64 documents is reached. Few samples have fewer than 9 candidates, and samples would have far more false candidates if more than 64 support documents were included. The number of query types in WIKIHOP is 277, whereas in MEDHOP there is only one: interacts with.

            Train    Dev   Test   Total
WIKIHOP    43,738  5,129  2,451  51,318
MEDHOP      1,620    342    546   2,508

Table 1: Dataset sizes for our respective datasets.

                  min    max     avg  median
# cand. – WH        2     79    19.8      14
# docs. – WH        3     63    13.7      11
# tok/doc – WH      4  2,046   100.4      91
# cand. – MH        2      9     8.9       9
# docs. – MH        5     64    36.4      29
# tok/doc – MH      5    458   253.9     264

Table 2: Candidates and documents per sample and document length statistics. WH: WIKIHOP; MH: MEDHOP.

# 5.1 Qualitative Analysis

To establish the quality of the data and analyze potential distant supervision errors, we sampled and annotated 100 samples from each development set.
1710.06481#26
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
27
# 5.1 Qualitative Analysis

To establish the quality of the data and analyze potential distant supervision errors, we sampled and annotated 100 samples from each development set. WIKIHOP Table 3 lists characteristics along with the proportion of samples that exhibit them. For 45%, the true answer either uniquely follows from multiple texts directly or is suggested as likely. For 26%, more than one candidate is plausibly supported by the documents, including the correct answer. This is often due to hypernymy, where the appropriate level of granularity for the answer is difficult to predict – e.g. (west suffolk, administrative entity, ?) with candidates suffolk and england. This is a direct consequence of including type-consistent false answer candidates from WIKIDATA, which can lead to questions with several true answers. For 9% of the cases a single document suffices; these samples contain a document that states enough information about item and answer together. For example, the query (Louis Auguste, father, ?) has the correct answer Louis XIV of France, and French
1710.06481#27
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
29
Table 3: Qualitative analysis of WIKIHOP samples.

king Louis XIV is mentioned within the same document as Louis Auguste. Finally, although our task is significantly more complex than most previous tasks where distant supervision has been applied, the distant supervision assumption is only violated for 20% of the samples – a proportion similar to previous work (Riedel et al., 2010). These cases can either be due to conflicting information between WIKIDATA and WIKIPEDIA (8%), e.g. when the date of birth for a person differs between WIKIDATA and what is stated in the WIKIPEDIA article, or because the answer is consistent but cannot be inferred from the support documents (12%). When answering 100 questions, the annotator knew the answer prior to reading the documents for 9%, and produced the correct answer after reading the document sets for 74% of the cases. On 100 questions of a validated portion of the Dev set (see Section 5.3), 85% accuracy was reached.
1710.06481#29
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
30
MEDHOP Since both document complexity and number of documents per sample were significantly larger compared to WIKIHOP (see Figure 4 in Appendix B), it was not feasible to ask an annotator to read all support documents for 100 samples. We opted to verify the dataset quality by providing only the subset of documents relevant to support the correct answer, i.e., those traversed along the path reaching the answer. The annotator was asked if the answer to the query "follows", "is likely", or "does not follow", given the relevant documents. 68% of the cases were considered as "follows" or as "is likely". The majority of cases violating the distant supervision assumption were due to lacking a necessary PPI in one of the connecting documents.

# 5.2 Crowdsourced Human Annotation
1710.06481#30
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
31
# 5.2 Crowdsourced Human Annotation

We asked human annotators on Amazon Mechanical Turk to evaluate samples of the WIKIHOP development set. Similar to our qualitative analysis of MEDHOP, annotators were shown the query-answer pair as a fact and the chain of relevant documents leading to the answer. They were then instructed to answer (1) whether they knew the fact before; (2) whether the fact follows from the texts (with options "fact follows", "fact is likely", and "fact does not follow"); and (3) whether a single or several of the documents are required. Each sample was shown to three annotators and a majority vote was used to aggregate the annotations. Annotators were familiar with the fact 4.6% of the time; prior knowledge of the fact is thus not likely to be a confounding effect on the other judgments. Inter-annotator agreement as measured by Fleiss' kappa is 0.253 in (2), and 0.281 in (3) – indicating a fair overall agreement, according to Landis and Koch (1977). Overall, 9.5% of samples have no clear majority in (2).
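For reference, Fleiss' kappa can be computed from the raw annotation matrix, e.g. with statsmodels; the toy labels below are illustrative, not the actual annotations:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy annotation matrix: one row per sample, one column per annotator;
# entries encode 0="fact follows", 1="fact is likely", 2="does not follow".
ratings = np.array([
    [0, 0, 1],
    [2, 2, 2],
    [0, 1, 2],
    [1, 1, 0],
])

table, _ = aggregate_raters(ratings)  # -> (n_samples, n_categories) counts
print(fleiss_kappa(table))            # agreement score in [-1, 1]
```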
1710.06481#31
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
33
Among the samples with a majority vote for (2) of either "follows" or "likely", 55.9% were marked with a majority vote as requiring multiple documents to infer the fact, and 44.1% as requiring only a single document. The latter number is larger than initially expected, given the construction of samples through graph traversal. However, when inspecting cases judged as "single" more closely, we observed that many indeed provide a clear hint about the correct answer within one document, but without stating it explicitly. For example, for the fact (witold cichy, country of citizenship, poland) with documents d1: Witold Cichy (born March 15, 1986 in Wodzisław Śląski) is a Polish footballer[...] and d2: Wodzisław Śląski[...] is a town in Silesian Voivodeship, southern Poland[...], the information provided in d1 suffices for a human given the background knowledge that Polish is an attribute related to Poland, removing the need for d2 to infer the answer.

# 5.3 Validated Test Sets
1710.06481#33
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
34
# 5.3 Validated Test Sets

While training models on distantly supervised data is useful, one should ideally evaluate methods on a manually validated test set. We thus identified subsets of the respective test sets for which the correct answer can be inferred from the text. This is in contrast to prior work such as Hermann et al. (2015), Hill et al. (2016), and Hewlett et al. (2016), who evaluate only on distantly supervised samples. For WIKIHOP, we applied the same annotation strategy as described in Section 5.2. The validated test set consists of those samples labeled by a majority of annotators (at least 2 of 3) as "follows", and requiring "multiple" documents. While desirable, crowdsourcing is not feasible for MEDHOP since it requires specialist knowledge. In addition, the number of document paths is ≈3x larger, which along with the complexity of the documents greatly increases the annotation time. We thus manually annotated 20% of the MEDHOP test set and identified the samples for which the text implies the correct answer and where multiple documents are required.

# 6 Experiments
1710.06481#34
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
35
# 6 Experiments

This section describes experiments on WIKIHOP and MEDHOP with the goal of establishing the performance of several baseline models, including recent neural RC models. We empirically demonstrate the importance of mitigating dataset biases, probe whether multi-step behavior is beneficial for solving the task, and investigate if RC models can learn to perform lexical abstraction. Training will be conducted on the respective training sets, and evaluation on both the full test set and validated portion (Section 5.3), allowing for a comparison between the two.

# 6.1 Models

Random Selects a random candidate; note that the number of candidates differs between samples. Max-mention Predicts the most frequently mentioned candidate in the support documents Sq of a sample – randomly breaking ties. Majority-candidate-per-query-type Predicts the candidate c ∈ Cq that was most frequently observed as the true answer in the training set, given the query type of q. For WIKIHOP, the query type is the property p of the query; for MEDHOP there is only the single query type – interacts with.
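A minimal sketch of the majority-candidate-per-query-type baseline, assuming a hypothetical record layout with query_type, candidates, and answer fields:

```python
from collections import Counter, defaultdict

def fit_majority_per_query_type(train_samples):
    """Per query type (the property p in WIKIHOP), count how often each
    candidate string occurs as the correct answer in the training set."""
    counts = defaultdict(Counter)
    for s in train_samples:
        counts[s["query_type"]][s["answer"]] += 1
    return counts

def predict_majority(counts, sample):
    """Pick the candidate most frequently seen as the answer for this
    query type; unseen candidates score 0 and lose ties arbitrarily."""
    freq = counts[sample["query_type"]]
    return max(sample["candidates"], key=lambda c: freq[c])
```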
1710.06481#35
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
36
type of q. For WIKIHOP, the query type is the property p of the query; for MEDHOP there is only the single query type – interacts with. TF-IDF Retrieval-based models are known to be strong QA baselines if candidate answers are provided (Clark et al., 2016; Welbl et al., 2017). They search for individual documents based on keywords in the question, but typically do not combine information across documents. The purpose of this baseline is to see if it is possible to identify the correct answer from a single document alone through lexical correlations. The model forms its prediction as follows: For each candidate c, the concatenation of the query q with c is fed as an OR query into the whoosh text retrieval engine.5 It then predicts the candidate with the highest TF-IDF similarity score:

arg max_{c ∈ C_q} [ max_{s ∈ S_q} TF-IDF(q + c, s) ]    (1)

Document-cue During dataset construction we observed that certain document-answer pairs appear more frequently than others, to the effect that the correct candidate is often indicated solely by the presence of certain documents in Sq. This baseline captures how easy it is for a model to exploit these informative document-answer co-occurrences. It predicts the candidate with the highest score across Cq:
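The paper's TF-IDF baseline uses whoosh; the sketch below reproduces Eq. (1) with scikit-learn instead, scoring each candidate by its best single-document TF-IDF match (the function name and inputs are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_predict(query, candidates, support_docs):
    """Score each candidate by the best TF-IDF similarity between the
    string 'query + candidate' and any single support document, then
    return the argmax over candidates, as in Eq. (1)."""
    vec = TfidfVectorizer().fit(support_docs)
    doc_matrix = vec.transform(support_docs)
    scores = {
        c: cosine_similarity(vec.transform([query + " " + c]), doc_matrix).max()
        for c in candidates
    }
    return max(scores, key=scores.get)
```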
1710.06481#36
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
37
arg max_{c ∈ C_q} [ max_{d ∈ S_q} cooccurrence(d, c) ]    (2)

Extractive RC models: FastQA and BiDAF In our experiments we evaluate two recently proposed LSTM-based extractive QA models: the Bidirectional Attention Flow model (BiDAF, Seo et al. (2017a)), and FastQA (Weissenborn et al., 2017), which have shown a robust performance across several datasets. These models predict an answer span within a single document. We adapt them to a multi-document setting by sequentially concatenating all d ∈ Sq in random order into a superdocument, adding document separator tokens. During training, the first answer mention in the concatenated document serves as the gold span.6 At test time, we measured accuracy based on the exact match between

5 https://pypi.python.org/pypi/Whoosh/
6 We also tested assigning the gold span randomly to any one of the mentions of the answer, with insignificant changes.
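Eq. (2) translates directly into a lookup over the co-occurrence counts built on the training set (see the earlier cooccurrence_counts sketch); the record layout is again hypothetical:

```python
def document_cue_predict(sample, cooccurrence):
    """Eq. (2): choose the candidate whose best document-answer
    co-occurrence count over the sample's support documents is highest.
    `cooccurrence` is the Counter over (document, candidate) pairs
    built on the training set."""
    return max(
        sample["candidates"],
        key=lambda c: max(cooccurrence[(d, c)] for d in sample["docs"]),
    )
```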
1710.06481#37
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
38
one of the mentions of the answer, with insignificant changes. the prediction and answer, both lowercased, after removing articles, trailing white spaces and punctuation, in the same way as Rajpurkar et al. (2016). To rule out any signal stemming from the order of documents in the superdocument, this order is randomized both at training and test time. In a preliminary experiment we also trained models using different random document order permutations, but found that performance did not change significantly. For BiDAF, the default hyperparameters from the implementation of Seo et al. (2017a) are used, with pretrained GloVe (Pennington et al., 2014) embeddings. However, we restrict the maximum document length to 8,192 tokens and hidden size to 20, and train for 5,000 iterations with batch size 16 in order to fit the model into memory.7 For FastQA we use the implementation provided by the authors, also with pre-trained GloVe embeddings, no character embeddings, no maximum support length, hidden size 50, and batch size 64 for 50 epochs.
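The answer normalization of Rajpurkar et al. (2016) referenced here is commonly implemented as below, mirroring the official SQuAD evaluation script:

```python
import re
import string

def normalize_answer(s):
    """Lowercase, strip punctuation, drop articles (a/an/the), and
    collapse whitespace before comparing strings for exact match."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold):
    return normalize_answer(prediction) == normalize_answer(gold)
```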
1710.06481#38
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
1710.06481
39
While BiDAF and FastQA were initially developed and tested on single-hop RC datasets, their usage of bidirectional LSTMs and attention over the full sequence theoretically gives them the capacity to integrate information from different locations in the (super-)document. In addition, BiDAF employs iterative conditioning across multiple layers, potentially making it even better suited to integrate information found across the sequence.

# 6.2 Lexical Abstraction: Candidate Masking

The presence of lexical regularities among answers is a problem in RC dataset assembly – a phenomenon already observed by Hermann et al. (2015). When comprehending a text, the correct answer should become clear from its context – rather than from an intrinsic property of the answer expression. To evaluate the ability of models to rely on context alone, we created masked versions of the datasets: we replace any candidate expression randomly using 100 unique placeholder tokens, e.g. "Mumbai is the most populous city in MASK7." Masking is consistent within one sample, but generally different for the same expression across samples. This not only removes answer frequency cues,

7 The superdocument has a larger number of tokens compared to e.g. SQuAD, thus the additional memory requirements.
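A sketch of the masking step, assuming documents are plain strings and candidates are surface strings; a faithful implementation would match all name variants rather than raw substrings:

```python
import random

def mask_candidates(sample, n_placeholders=100, seed=None):
    """Replace each candidate string with one of `n_placeholders`
    placeholder tokens, consistently within the sample but re-randomized
    across samples. Assumes at most n_placeholders candidates per sample
    (the WIKIHOP maximum is 79)."""
    rng = random.Random(seed)
    ids = rng.sample(range(n_placeholders), len(sample["candidates"]))
    mapping = {c: "MASK{}".format(i) for c, i in zip(sample["candidates"], ids)}
    masked_docs = []
    for doc in sample["docs"]:
        for c, m in mapping.items():
            doc = doc.replace(c, m)  # crude: exact substring replacement
        masked_docs.append(doc)
    return dict(sample,
                docs=masked_docs,
                candidates=sorted(mapping.values()),
                answer=mapping[sample["answer"]])
```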
1710.06481#39
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
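A minimal sketch of the per-sample candidate masking described above, assuming plain string replacement over the support documents and at most 100 candidates per sample; all names are illustrative rather than the authors' code.

import random

def mask_candidates(support_docs, candidates, num_placeholders=100):
    # The candidate-to-placeholder assignment is fixed within a sample
    # (consistent masking) but drawn fresh for every sample, so the same
    # expression generally receives different placeholders across samples.
    assert len(candidates) <= num_placeholders
    tokens = random.sample(
        [f"MASK{i}" for i in range(num_placeholders)], len(candidates))
    mapping = dict(zip(candidates, tokens))
    masked_docs = []
    for doc in support_docs:
        for expression, token in mapping.items():
            doc = doc.replace(expression, token)
        masked_docs.append(doc)
    return masked_docs, mapping

The same mapping would also be applied to the candidate list and the answer, so that a model can only succeed by reading the masked contexts.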
1710.06481
40
it also removes statistical correlations between frequent answer strings and support documents, which simple heuristics such as the Document-cue baseline (sketched below) can exploit. Models consequently cannot base their prediction on intrinsic properties of the answer expression, but have to rely on the context surrounding the mentions.

Model            Unfiltered   Filtered
Document-cue     74.6         36.7
Maj. candidate   41.2         38.8
TF-IDF           43.8         25.6
Train set size   527,773      43,738

Table 4: Accuracy comparison for simple baseline models on WIKIHOP before and after filtering.

# 6.3 Results and Discussion
1710.06481#40
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
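The Document-cue baseline of Table 4 is defined earlier in the paper; under one plausible reading, consistent with the later remark about sub-sampling frequent document-answer pairs, it predicts the candidate most often observed as the training-time answer together with one of the sample's support documents. A hedged sketch of that reading, with illustrative field names:

from collections import Counter, defaultdict

def train_document_cue(train_samples):
    # Count how often each (support document, answer) pair occurs at
    # training time; documents are identified by some stable id.
    pair_counts = defaultdict(Counter)
    for sample in train_samples:
        for doc_id in sample["support_doc_ids"]:
            pair_counts[doc_id][sample["answer"]] += 1
    return pair_counts

def predict_document_cue(pair_counts, sample):
    # Score each candidate by its strongest document-answer association
    # among this sample's support documents.
    def score(candidate):
        return max((pair_counts[d][candidate]
                    for d in sample["support_doc_ids"]), default=0)
    return max(sample["candidates"], key=score)

Its drop from 74.6% to 36.7% in Table 4 illustrates how much of this cue the filtering removes.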
1710.06481
41
Table 5 shows the experimental outcomes for WIKIHOP and MEDHOP, together with results for the masked setting; we will first discuss the former. A first observation is that candidate mention frequency does not produce better predictions than a random guess. Predicting the answer most frequently observed at training time achieves strong results: as much as 38.8% / 44.2% and 58.4% / 67.3% on the two datasets, for the full and validated test sets respectively. That is, a simple frequency statistic together with answer type constraints alone is a relatively strong predictor (a sketch of this majority-candidate baseline is given below), and the strongest overall for the “unmasked” version of MEDHOP.

The TF-IDF retrieval baseline clearly performs better than random for WIKIHOP, but is not very strong overall. That is, the question tokens are helpful to detect relevant documents, but exploiting only this information compares poorly to the other baselines. On the other hand, as no co-mention of an interacting drug pair occurs within any single document in MEDHOP, the TF-IDF baseline performs worse than random. We conclude that lexical matching with a single support document is not enough to build a strong predictive model for either dataset.
1710.06481#41
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]
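The majority-candidate baseline discussed above admits a very short implementation. A sketch, assuming each sample carries the relation being queried, its candidate list, and its gold answer; the field names are illustrative.

from collections import Counter, defaultdict

def train_majority(train_samples):
    # Per query type, count how often each candidate is the gold answer.
    counts = defaultdict(Counter)
    for sample in train_samples:
        counts[sample["query_relation"]][sample["answer"]] += 1
    return counts

def predict_majority(counts, sample):
    # Answer type constraints come for free: only the sample's own
    # candidates are eligible, ranked by training-time answer frequency
    # for this query type (unseen relations fall back to the first
    # candidate, since all frequencies are then zero).
    freq = counts[sample["query_relation"]]
    return max(sample["candidates"], key=lambda c: freq[c])

That such a statistic reaches 38.8% on WIKIHOP and 58.4% on MEDHOP motivates both the filtering reflected in Table 4 and the masked setting.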
1710.06481
42
The Document-cue baseline can predict more than a third of the samples correctly, for both datasets, even after sub-sampling frequent document-answer pairs for WIKIHOP. The relative strength of this and other baselines proves to be an important issue when designing multi-hop datasets, which we addressed through the measures described in Sec

                                        WIKIHOP                     MEDHOP
                                   standard      masked       standard      masked
Model                              test   test*  test   test* test   test*  test   test*
Random                             11.5   12.2   12.2   13.0  13.9   20.4   14.1   22.4
Max-mention                        10.6   15.9   13.9   20.1   9.5   16.3    9.2   16.3
Majority-candidate-per-query-type  38.8   44.2   12.0   13.7  58.4   67.3   10.4    6.1
TF-IDF                             25.6   36.7   14.4   24.2   9.0   14.3    8.8   14.3
Document-cue                       36.7   41.7    7.4   20.3  44.9   53.1   15.2   16.3
FastQA                             25.7   27.2   35.8   38.0  23.1   24.5   31.3   30.6
BiDAF                              42.9   49.7   54.5   59.8  47.8   61.2   33.7   42.9
1710.06481#42
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Most Reading Comprehension methods limit themselves to queries which can be answered using a single sentence, paragraph, or document. Enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods, but currently there exist no resources to train and test this capability. We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods. In our task, a model learns to seek and combine evidence - effectively performing multi-hop (alias multi-step) inference. We devise a methodology to produce datasets for this task, given a collection of query-answer pairs and thematically linked documents. Two datasets from different domains are induced, and we identify potential pitfalls and devise circumvention strategies. We evaluate two previously proposed competitive models and find that one can integrate information across documents. However, both models struggle to select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models outperform several strong baselines, their best accuracy reaches 42.9% compared to human performance at 74.0% - leaving ample room for improvement.
http://arxiv.org/pdf/1710.06481
Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
cs.CL, cs.AI
This paper directly corresponds to the TACL version (https://transacl.org/ojs/index.php/tacl/article/view/1325) apart from minor changes in wording, additional footnotes, and appendices
Transactions of the Association for Computational Linguistics (TACL), Vol 6 (2018), pages 287-302
cs.CL
20171017
20180611
[]