Combining policy gradient and Q-learning
Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 02 2015. URL http://dx.doi.org/10.1038/nature14236.

Published as a conference paper at ICLR 2017

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. Reward augmented maximum likelihood for neural structured prediction. arXiv preprint arXiv:1609.00150, 2016.

Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.

Edwin Pednault, Naoki Abe, and Bianca Zadrozny. Sequential cost-sensitive decision making with reinforcement learning. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 259–268. ACM, 2002.

Jing Peng and Ronald J Williams. Incremental multi-step Q-learning. Machine Learning, 22(1-3):283–290, 1996.

Jan Peters, Katharina Mülling, and Yasemin Altun. Relative entropy policy search. In AAAI. Atlanta, 2010.

Martin Riedmiller. Neural fitted Q iteration: first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pp. 317–328. Springer Berlin Heidelberg, 2005.

Gavin A Rummery and Mahesan Niranjan.
On-line Q-learning using connectionist systems. 1994.

Brian Sallans and Geoffrey E Hinton. Reinforcement learning with factored states and actions. Journal of Machine Learning Research, 5(Aug):1063–1088, 2004.

Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1889–1897, 2015.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML), pp. 387–395, 2014.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al.
Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

R. Sutton and A. Barto. Reinforcement Learning: an Introduction. MIT Press, 1998.

Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al.
Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 99, pp. 1057–1063, 1999.

Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.

Philip Thomas. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pp. 441–448, 2014.
Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), pp. 2094–2100, 2016.

Harm Van Seijen, Hado Van Hasselt, Shimon Whiteson, and Marco Wiering. A theoretical and empirical analysis of expected sarsa. In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp. 177–184. IEEE, 2009.

Yin-Hao Wang, Tzuu-Hseng S Li, and Chih-Jui Lin. Backward Q-learning: The combination of Sarsa algorithm and Q-learning. Engineering Applications of Artificial Intelligence, 26(9):2184–2193, 2013.

Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1995–2003, 2016.
Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241–268, 1991.

# A PGQL BELLMAN RESIDUAL

Here we demonstrate that in the tabular case the Bellman residual of the induced Q-values at the fixed point of the PGQL updates converges to zero as the temperature $\alpha$ decreases, which is the same guarantee as for vanilla regularized policy gradient. We use the notation that $\pi_\alpha$ is the policy at the fixed point of the PGQL updates (14) for some $\alpha$, i.e., $\pi_\alpha \propto \exp(\tilde{Q}^{\pi_\alpha}/\alpha)$, with estimated Q-values $\tilde{Q}^{\pi_\alpha}$ and induced true Q-values $Q^{\pi_\alpha}$.

First, note that we can apply the same argument as before to show that $\lim_{\alpha \to 0} \|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha}\| = 0$ (the only difference is that we lack the property that $\tilde{Q}^{\pi_\alpha}$ is the fixed point of $\mathcal{T}^{\pi_\alpha}$). Secondly, from the fixed-point equation we can write $\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha} = \eta(\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha})$. Combining these two facts, and using the fact that $Q^{\pi_\alpha} = \mathcal{T}^{\pi_\alpha} Q^{\pi_\alpha}$, we have
$$\begin{aligned}
\|\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| &= \eta \|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| \\
&= \eta \|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha} + \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} Q^{\pi_\alpha}\| \\
&\leq \eta \big(\|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha}\| + \gamma \|\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\|\big),
\end{aligned}$$
which, using $\eta \leq 1$, rearranges to $\|\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| \leq \eta/(1-\gamma)\,\|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha}\|$, and so $\|\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| \to 0$ as $\alpha \to 0$. Using this fact we have
$$\begin{aligned}
\|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \tilde{Q}^{\pi_\alpha}\| &= \|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha} + \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} Q^{\pi_\alpha} + Q^{\pi_\alpha} - \tilde{Q}^{\pi_\alpha}\| \\
&\leq \|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha}\| + \|\mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} Q^{\pi_\alpha}\| + \|\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| \\
&\leq \|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha}\| + (1 + \gamma)\|\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| \\
&\leq 3/(1-\gamma)\,\|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \mathcal{T}^{\pi_\alpha} \tilde{Q}^{\pi_\alpha}\|,
\end{aligned}$$
which therefore also converges to zero in the limit. Finally we obtain
$$\begin{aligned}
\|\mathcal{T}^* Q^{\pi_\alpha} - Q^{\pi_\alpha}\| &= \|\mathcal{T}^* Q^{\pi_\alpha} - \mathcal{T}^* \tilde{Q}^{\pi_\alpha} + \mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \tilde{Q}^{\pi_\alpha} + \tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| \\
&\leq \|\mathcal{T}^* Q^{\pi_\alpha} - \mathcal{T}^* \tilde{Q}^{\pi_\alpha}\| + \|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \tilde{Q}^{\pi_\alpha}\| + \|\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| \\
&\leq (1 + \gamma)\|\tilde{Q}^{\pi_\alpha} - Q^{\pi_\alpha}\| + \|\mathcal{T}^* \tilde{Q}^{\pi_\alpha} - \tilde{Q}^{\pi_\alpha}\|,
\end{aligned}$$
which combined with the two previous results implies that $\lim_{\alpha \to 0} \|\mathcal{T}^* Q^{\pi_\alpha} - Q^{\pi_\alpha}\| = 0$, as before.
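As a numerical illustration of the first bound (our own sketch, not from the paper: the random MDP, the arbitrary fixed policy, and the weight $\eta = 0.5$ are hypothetical choices), one can solve $Q^\pi$ for a fixed policy, iterate the fixed-point relation $\tilde{Q} = (1-\eta) Q^\pi + \eta \mathcal{T}^* \tilde{Q}$ (equivalent to $\tilde{Q} - Q^\pi = \eta(\mathcal{T}^* \tilde{Q} - Q^\pi)$), and check $\|\tilde{Q} - Q^\pi\|_\infty \leq \eta/(1-\gamma)\,\|\mathcal{T}^* \tilde{Q} - \mathcal{T}^\pi \tilde{Q}\|_\infty$:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, eta = 4, 3, 0.9, 0.5

# Random tabular MDP: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # shape (nS, nA, nS)
R = rng.standard_normal((nS, nA))               # rewards r(s, a)
pi = rng.dirichlet(np.ones(nA), size=nS)        # an arbitrary fixed policy

def T_pi(Q):
    """On-policy Bellman operator T^pi (a gamma-contraction)."""
    v = (pi * Q).sum(axis=1)                    # V(s) = sum_a pi(a|s) Q(s,a)
    return R + gamma * P @ v

def T_star(Q):
    """Optimality Bellman operator T*."""
    return R + gamma * P @ Q.max(axis=1)

# Q^pi: fixed point of T^pi, found by iterating to convergence.
Q_pi = np.zeros((nS, nA))
for _ in range(2000):
    Q_pi = T_pi(Q_pi)

# Q~: fixed point of Q = (1 - eta) Q^pi + eta T* Q (an eta*gamma-contraction).
Q_t = np.zeros((nS, nA))
for _ in range(2000):
    Q_t = (1 - eta) * Q_pi + eta * T_star(Q_t)

lhs = np.max(np.abs(Q_t - Q_pi))
rhs = eta / (1 - gamma) * np.max(np.abs(T_star(Q_t) - T_pi(Q_t)))
assert lhs <= rhs + 1e-8
```

The bound holds for any such MDP since it only uses the fixed-point relation and the contraction property of the Bellman operators.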
# B ATARI SCORES

| Game | A3C | Q-learning | PGQL |
|---|---|---|---|
| alien | 38.43 | 25.53 | 46.70 |
| amidar | 68.69 | 12.29 | 71.00 |
| assault | 854.64 | 1695.21 | 2802.87 |
| asterix | 191.69 | 98.53 | 3790.08 |
| asteroids | 24.37 | 5.32 | 50.23 |
| atlantis | 15496.01 | 13635.88 | 16217.49 |
| bank heist | 210.28 | 91.80 | 212.15 |
| battle zone | 21.63 | 2.89 | 52.00 |
| beam rider | 59.55 | 79.94 | 155.71 |
| berzerk | 79.38 | 55.55 | 92.85 |
| bowling | 2.70 | -7.09 | 3.85 |
| boxing | 510.30 | 299.49 | 902.77 |
| breakout | 2341.13 | 3291.22 | 2959.16 |
| centipede | 50.22 | 105.98 | 73.88 |
| chopper command | 61.13 | 19.18 | 162.93 |
| crazy climber | 510.25 | 189.01 | 476.11 |
| defender | 475.93 | 58.94 | 911.13 |
| demon attack | 4027.57 | 3449.27 | 3994.49 |
| double dunk | 1250.00 | 91.35 | 1375.00 |
| enduro | 9.94 | 9.94 | 9.94 |
| fishing derby | 140.84 | -14.48 | 145.57 |
| freeway | -0.26 | -0.13 | -0.13 |
| frostbite | 5.85 | 10.71 | 5.71 |
| gopher | 429.76 | 9131.97 | 2060.41 |
| gravitar | 0.71 | 1.35 | 1.74 |
| hero | 145.71 | 15.47 | 92.88 |
| ice hockey | 62.25 | 21.57 | 76.96 |
| jamesbond | 133.90 | 110.97 | 142.08 |
| kangaroo | -0.94 | -0.94 | -0.75 |
| krull | 736.30 | 3586.30 | 557.44 |
| kung fu master | 182.34 | 260.14 | 254.42 |
| montezuma revenge | -0.49 | 1.80 | -0.48 |
| ms pacman | 17.91 | 10.71 | 25.76 |
| name this game | 102.01 | 113.89 | 188.90 |
| phoenix | 447.05 | 812.99 | 1507.07 |
| pitfall | 5.48 | 5.49 | 5.49 |
| pong | 116.37 | 24.96 | 116.37 |
| private eye | -0.88 | 0.03 | -0.04 |
| qbert | 186.91 | 159.71 | 136.17 |
| riverraid | 107.25 | 65.01 | 128.63 |
| road runner | 603.11 | 179.69 | 519.51 |
| robotank | 15.71 | 134.87 | 71.50 |
| seaquest | 3.81 | 3.71 | 5.88 |
| skiing | 54.27 | 54.10 | 54.16 |
| solaris | 27.05 | 34.61 | 28.66 |
| space invaders | 188.65 | 146.39 | 608.44 |
| star gunner | 756.60 | 205.70 | 977.99 |
| surround | 28.29 | -1.51 | 78.15 |
| tennis | 145.58 | -15.35 | 145.58 |
| time pilot | 270.74 | 91.59 | 438.50 |
| tutankham | 224.76 | 110.11 | 239.58 |
| up n down | 1637.01 | 148.10 | 1484.43 |
| venture | -1.76 | -1.76 | -1.76 |
| video pinball | 3007.37 | 4325.02 | 4743.68 |
| wizard of wor | 150.52 | 88.07 | 325.39 |
| yars revenge | 81.54 | 23.39 | 252.83 |
| zaxxon | 4.01 | 44.11 | 224.89 |
Table 3: Normalized scores for the Atari suite from random starts, as a percentage of human-normalized score.
Published as a conference paper at ICLR 2017

# GENERATIVE MULTI-ADVERSARIAL NETWORKS

Ishan Durugkar*, Ian Gemp*, Sridhar Mahadevan
College of Information and Computer Sciences
University of Massachusetts, Amherst
Amherst, MA 01060, USA
{idurugkar, imgemp, mahadeva}@cs.umass.edu

# ABSTRACT
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
# 1 INTRODUCTION

Generative adversarial networks (GANs) (Goodfellow et al. (2014)) are a framework for producing a generative model by way of a two-player minimax game. One player, the generator, attempts to generate realistic data samples by transforming noisy samples, $z$, drawn from a simple distribution (e.g., $z \sim \mathcal{N}(0, 1)$) using a transformation function $G_\theta(z)$ with learned weights, $\theta$. The generator receives feedback as to how realistic its synthetic sample is from another player, the discriminator, which attempts to discern between synthetic data samples produced by the generator and samples drawn from an actual dataset using a function $D_\omega(x)$ with learned weights, $\omega$.
The GAN framework is one of the more recent successes in a line of research on adversarial training in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games between learners are carefully crafted so that Nash equilibria coincide with some set of desired optimality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun et al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of application domains including learning censored representations (Edwards & Storkey (2015)), imitating expert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending GANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014); Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning (Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015); Radford et al. (2015)) have shown promise as well.

Despite these successes, GANs are reputably difficult to train. While research is still underway to improve training techniques and heuristics (Salimans et al. (2016)), most approaches have focused on understanding and generalizing GANs theoretically with the aim of exploring more tractable formulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)).

In this paper, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4, we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants which range the role of the discriminator from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable.
In Section 5, we define an intuitive metric (GMAM) to quantify GMAN performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.

*Equal contribution

Contributions. To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model.

# 2 GENERATIVE ADVERSARIAL NETWORKS TO GMAN

The original formulation of a GAN is a minimax game between a generator, $G_\theta(z): z \to x$, and a discriminator, $D_\omega(x): x \to [0, 1]$,
$$\min_G \max_{D \in \mathcal{D}} V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log(D(x))\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log(1 - D(G(z)))\big], \quad (1)$$
where $p_{data}(x)$ is the true data distribution and $p_z(z)$ is a simple (usually fixed) distribution that is easy to draw samples from (e.g., $\mathcal{N}(0, 1)$). We differentiate between the function space of discriminators, $\mathcal{D}$, and elements of this space, $D$. Let $p_G(x)$ be the distribution induced by the generator, $G_\theta(z)$. We assume $D, G$ to be deep neural networks as is typically the case.

In their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, $D^* = \arg\max_D V(D, G)$, gradient descent on $p_G(x)$ will recover the desired globally optimal solution, $p_G(x) = p_{data}(x)$, so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, $\log(1 - D(G(z)))$, with $-\log(D(G(z)))$ to enhance gradient signals at the start of the game; note this is no longer a zero-sum game.
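To make Eq. (1) concrete, the following sketch estimates $V(D, G)$ by Monte Carlo for a toy one-dimensional setup. The Gaussian data distribution and the particular $D$ and $G$ below are hypothetical stand-ins for illustration, not the deep networks used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    """Toy discriminator: a logistic function of x (hypothetical choice)."""
    return 1.0 / (1.0 + np.exp(-x))

def G(z):
    """Toy generator: an affine transform of the noise (hypothetical choice)."""
    return 0.5 * z - 1.0

def gan_value(n=100_000):
    """Monte Carlo estimate of V(D, G) from Eq. (1)."""
    x = rng.normal(1.0, 1.0, size=n)   # x ~ p_data (assumed Gaussian here)
    z = rng.normal(0.0, 1.0, size=n)   # z ~ p_z
    return np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(G(z))))

v = gan_value()
```

Since both expectations are of logs of probabilities, $V(D, G)$ is always negative; the discriminator ascends this value while the generator descends it.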
Part of their convergence and optimality proof involves using the oracle, $D^*$, to reduce the minimax game to a minimization over $G$ only:
$$\min_G V(D^*, G) = \min_G \Big\{ C(G) = -\log(4) + 2 \cdot JSD(p_{data} \,\|\, p_G) \Big\} \quad (2)$$
where $JSD$ denotes the Jensen-Shannon divergence. Minimizing $C(G)$ necessarily minimizes $JSD$, however, we rarely know $D^*$ and so we instead minimize $V(D, G)$, which is only a lower bound.

This perspective of minimizing the distance between the distributions, $p_{data}$ and $p_G$, motivated Li et al. (2015) to develop a generative model that matches all moments of $p_G(x)$ with $p_{data}(x)$ (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN (Zhao et al. (2016)), explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued "energies" as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more general divergences, specifically f-divergences and then Bregman divergences, respectively.

In general, these approaches focus on exploring fundamental reformulations of $V(D, G)$. Similarly, our work focuses on a fundamental reformulation, however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of $V$.

2.1 GMAN: A MULTI-ADVERSARIAL EXTENSION

We propose introducing multiple discriminators, which brings with it a number of design possibilities. We explore approaches ranging between two extremes: 1) a more discriminating $D$ (better approximating $\max_D V(D, G)$) and 2) a $D$ better matched to the generator's capabilities. Mathematically, we reformulate $G$'s objective as $\min_G \max F(V(D_1, G), \ldots, V(D_N, G))$ for different choices of $F$ (see Figure 1). Each $D_i$ is still expected to independently maximize its own $V(D_i, G)$ (i.e. no cooperation). We sometimes abbreviate $V(D_i, G)$ with $V_i$ and $F(V_1, \ldots, V_N)$ with $F_G(V_i)$.
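A minimal sketch of this aggregation step (the values and the two aggregators, max and mean, are illustrative; the softer choices of $F$ discussed in Section 4 interpolate between them):

```python
import numpy as np

def aggregate(values, F="mean"):
    """Aggregate per-discriminator values V_i = V(D_i, G) into F_G(V_i).

    F := "max"  -> G trains against the single best discriminator.
    F := "mean" -> G trains against an ensemble of discriminators.
    """
    V = np.asarray(values, dtype=float)
    if F == "max":
        return float(V.max())
    if F == "mean":
        return float(V.mean())
    raise ValueError(f"unknown aggregator: {F}")

# e.g., three discriminators with game values V_1..V_3 (V <= 0 in this game)
Vs = [-1.2, -0.4, -2.0]
```

Each $D_i$ would still be trained to maximize its own $V_i$ independently; only the generator sees the aggregated value.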
# 3 A FORMIDABLE ADVERSARY

Here, we consider multi-discriminator variants that attempt to better approximate $\max_D V(D, G)$, providing a harsher critic to the generator.

Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. If $F := \max$, $G$ trains against the best discriminator. If $F := \text{mean}$, $G$ trains against an ensemble. We explore other alternatives to $F$ in Sections 4.1 & 4.4 that improve on both these options.

3.1 MAXIMIZING V(D,G)

For a fixed $G$, maximizing $F_G(V_i)$ with $F := \max$ and $N$ randomly instantiated copies of our discriminator is functionally equivalent to optimizing $V$ (e.g., stochastic gradient ascent) with random restarts in parallel and then presenting $\max_{j \in \{1, \ldots, N\}} V(D_j, G)$ as the loss to the generator, a very pragmatic approach to the difficulties presented by the non-convexity of $V$ caused by the deep net. Requiring the generator to minimize the max forces $G$ to generate high fidelity samples that must hold up under the scrutiny of all $N$ discriminators, each potentially representing a distinct max.

In practice, $\max_{D_i \in \mathcal{D}} V(D_i, G)$ is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing $N$ discriminators affects the dynamics of the game, which affects the trajectories of the discriminators. This prevents us from claiming $\max\{V_1(t), \ldots, V_N(t)\} > \max\{V_1'(t)\}\ \forall t$ even if we initialize $D_1(0) = D_1'(0)$, as it is unlikely that $D_1(t) = D_1'(t)$ at some time $t$ after the start of the game.

3.2 BOOSTING

We can also consider taking the max over $N$ discriminators as a form of boosting for the discriminator's online classification problem (online because $G$ can produce an infinite data stream). The boosted discriminator is given a sample $x_i$ and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the $N$ weaker $D_i$.

There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e. a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.

It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting $\max\{V_i\}$. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning, which motivates the next section. Boosting results appear in Appendix A.7.
# 4 A FORGIVING TEACHER

The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of $\max_D V(D, G)$ to the generator. Our next perspective asks the question, "Is $\max_D V(D, G)$ too harsh a critic?"

4.1 SOFT-DISCRIMINATOR

In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered "realistic" by the discriminator's standards, and so the generator will receive uniformly negative feedback. This is problematic because the information contained in the gradient derived from negative feedback only dictates where to drive down $p_G(x)$, not specifically where to increase $p_G(x)$. Furthermore, driving down $p_G(x)$ necessarily increases $p_G(x)$ in other regions of $\mathcal{X}$ (to maintain $\int_x p_G(x) = 1$) which may or may not contain samples from the true dataset (whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide a generator towards amassing $p_G(x)$ in approximately correct regions of $\mathcal{X}$.

For this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means parameterized by $\lambda$, where $\lambda = 0$ corresponds to the mean and the max is recovered as $\lambda \to \infty$:
$$AM_{soft}(V, \lambda) = \sum_i w_i V_i \quad (3)$$
$$GM_{soft}(V, \lambda) = -\exp\Big(\sum_i w_i \log(-V_i)\Big) \quad (4)$$
$$HM_{soft}(V, \lambda) = -\Big(\sum_i w_i (-V_i)^{-1}\Big)^{-1} \quad (5)$$
where $w_i = e^{\lambda V_i} / \sum_j e^{\lambda V_j}$ with $\lambda \geq 0$, $V_i < 0$.
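The three soft means are easy to prototype. A minimal sketch (our own illustration, assuming the softmax weights $w_i = e^{\lambda V_i} / \sum_j e^{\lambda V_j}$ and some arbitrary negative values $V_i$):

```python
import numpy as np

def soft_weights(V, lam):
    """w_i ∝ exp(lam * V_i); lam = 0 gives uniform weights, large lam
    concentrates weight on the largest (least negative) V_i."""
    V = np.asarray(V, dtype=float)
    e = np.exp(lam * (V - V.max()))   # subtract max for numerical stability
    return e / e.sum()

def am_soft(V, lam):
    """Eq. (3): soft arithmetic mean."""
    return float(np.dot(soft_weights(V, lam), np.asarray(V, dtype=float)))

def gm_soft(V, lam):
    """Eq. (4): soft geometric mean (requires V_i < 0)."""
    V = np.asarray(V, dtype=float)
    return float(-np.exp(np.dot(soft_weights(V, lam), np.log(-V))))

def hm_soft(V, lam):
    """Eq. (5): soft harmonic mean (requires V_i < 0)."""
    V = np.asarray(V, dtype=float)
    return float(-1.0 / np.dot(soft_weights(V, lam), 1.0 / (-V)))

V = [-0.5, -1.0, -2.0]   # illustrative per-discriminator values
```

At $\lambda = 0$ all three reduce to the plain arithmetic, geometric, and harmonic means; as $\lambda$ grows, the weights concentrate on the least negative $V_i$ and all three approach the max.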
Using a softmax also has the well known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing $V(\tilde{D}, G)$ where $\tilde{D}$ is some convex combination of $D_i$ (see Appendix A.5).

4.2 USING THE ORIGINAL MINIMAX OBJECTIVE

To illustrate the effect the softmax has on training, observe that the component of $AM_{soft}(V, 0)$ relevant to generator training can be rewritten as
$$\sum_i \mathbb{E}_{x \sim p_G(x)}\big[\log(1 - D_i(x))\big] = N\, \mathbb{E}_{x \sim p_G(x)}\big[\log(z)\big], \quad (6)$$
where $z = \big(\prod_i (1 - D_i(x))\big)^{1/N}$. Note that the generator gradient, $|\partial \log(z)/\partial z|$, is minimized at $z = 1$ over $z \in (0, 1]$. From this form, it is clear that $z = 1$ if and only if $D_i = 0\ \forall i$, so $G$ only receives a vanishing gradient if all $D_i$ agree that the sample is fake; this is especially unlikely for large $N$. In other words, $G$ only needs to fool a single $D_i$ to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, $\log(1 - D)$. This is in contrast to the more popular $-\log(D)$ introduced to artificially enhance gradients at the start of training.

At the beginning of training, when $\max_{D_i} V(D_i, G)$ is likely too harsh a critic for the generator, we can set $\lambda$ closer to zero to use the mean, increasing the odds of providing constructive feedback to the generator. In addition, the discriminators have the added benefit of functioning as an ensemble, reducing the variance of the feedback presented to the generator, which is especially important when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase $\lambda$ to become more critical of the generator for more refined training.
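The rewriting above is a pointwise identity, $\sum_i \log(1 - D_i(x)) = N \log z$ with $z$ the geometric mean of the $(1 - D_i(x))$, which can be checked numerically (the discriminator outputs below are arbitrary illustrative values):

```python
import math

# Arbitrary discriminator outputs D_i(x) in (0, 1) for one generated sample x.
D = [0.5, 0.75, 0.1]
N = len(D)

# Left side: the sum of the per-discriminator generator losses.
lhs = sum(math.log(1.0 - d) for d in D)

# z is the geometric mean of the (1 - D_i(x)); the sum equals N * log(z).
z = math.prod(1.0 - d for d in D) ** (1.0 / N)
rhs = N * math.log(z)

assert abs(lhs - rhs) < 1e-9
# z = 1 (a vanishing generator gradient) would require every D_i(x) = 0,
# so G receives constructive feedback as soon as it fools a single D_i.
```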
4.3 MAINTAINING MULTIPLE HYPOTHESES

We argue for this ensemble approach on a more fundamental level as well. Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to $p_{data}(x)$, if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from $p_{data}(x)$; therefore, when computing expectations of $V(D, G)$, we only draw samples from our finite dataset. This is equivalent to training a GAN with $p_{data}(x) = \hat{p}_{data}(x)$, which is a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let's assume we are training a discriminator and generator, each with infinite capacity. In this case, the global optimum ($p_G(x) = \hat{p}_{data}(x)$) fails to capture any of the interesting structure from $p_{data}(x)$, the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum.
Figure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corresponding probability mass function is given in light gray. After training GMAN, three discriminators converge to distinct local optima which implicitly define distributions over the data (red, blue, yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribution in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.

In practice, this degenerate result is avoided by employing learners with limited capacity and corrupting data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true $p_{data}(x)$. Averaging over these multiple locally optimal discriminators increases the entropy of $\hat{p}_{data}(x)$ by diffusing the probability mass over the data space (see Figure 2 for an example).
4.4 AUTOMATING REGULATION

The problem of keeping the discriminator and generator in balance has been widely recognized in previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator collapse are not uncommon. In addition, the discriminator is oftentimes able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high dimensional sample). Salimans et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively superior discriminator. Here, we explore an approach that enables the generator to automatically temper the performance of the discriminator when necessary, but still encourages the generator to challenge itself against more accurate adversaries. Specifically, we augment the generator objective:
$$\min_{G, \lambda > 0} F_G(V_i) - f(\lambda) \quad (7)$$
where $f(\lambda)$ is monotonically increasing in $\lambda$, which appears in the softmax equations, (3)-(5). In experiments, we simply set $f(\lambda) = c\lambda$ with $c$ a constant (e.g., 0.001). The generator is incentivized to increase $\lambda$ to reduce its objective at the expense of competing against the best available adversary $D^*$ (see Appendix A.6).
# 5 EVALUATION

Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log likelihood estimates from Gaussian Parzen windows, which, they admit, have high variance and are known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score, however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is that given two generator, discriminator pairs $(G_1, D_1)$ and $(G_2, D_2)$, we should be able to learn their relative performance by judging each generator under the opponent's discriminator.
5.1 METRIC

In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators,
$$GMAM = \log\left( \frac{F^a_b(V^a)}{F^a_a(V^a)} \Big/ \frac{F^b_a(V^b)}{F^b_b(V^b)} \right) \quad (8)$$
where $a$ and $b$ refer to the two GMAN variants (see Section 3 for notation $F_G(V_i)$).
The idea here is similar. If $G_2$ performs better than $G_1$ with respect to both $D_1$ and $D_2$, then GMAM>0 (remember $V \leq 0$ always). If $G_1$ performs better in both cases, GMAM<0; otherwise, the result is indeterminate.

5.2 EXPERIMENTS

We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with quality of the steady state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare

- F-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7).
- P-boost: $D_i$ is trained according to AdaBoost.OL. A max over the weak learner losses is presented to the generator instead of the boosted prediction (see Appendix A.7).
- GMAN-max: $\max\{V_i\}$ is presented to the generator.
- GAN: Standard GAN with a single discriminator (see Appendix A.2).
- mod-GAN: GAN with modified objective (generator minimizes $-\log(D(G(z)))$).
- GMAN-$\lambda$: GMAN with $F :=$ arithmetic softmax with parameter $\lambda$.
- GMAN*: The arithmetic softmax is controlled by the generator through $\lambda$.

All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)), and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their networks to probabilities with squashed sigmoids to prevent saturating logarithms in the minimax objective.
See Appendix A.8 for further details. We test GMAN systems with N = {2, 5} discriminators. We maintain discriminator diversity by varying dropout and network depth.

5.2.1 MNIST

Figure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady-state by 2x on MNIST; increasing N (the size of the discriminator ensemble) also has the added benefit of reducing the variance of the minimax objective over runs. Figure 4 displays the variance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady-state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately an epoch before the single-discriminator run; digits at steady-state appear slightly sharper as well. Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5, with GMAN* achieving the best overall performance. Figure 6 reveals GMAN*'s attempt to regulate the difficulty of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed λ's to the variable λ controlled by GMAN*.

Score  | Variant  | GMAN*         | GMAN-0        | GMAN-max      | mod-GAN
 0.127 | GMAN*    | -             | -0.020 ± 0.009 | -0.028 ± 0.019 | -0.089 ± 0.036
 0.007 | GMAN-0   | 0.020 ± 0.009 | -              | -0.013 ± 0.015 | -0.018 ± 0.027
-0.034 | GMAN-max | 0.028 ± 0.019 | 0.013 ± 0.015  | -              | -0.011 ± 0.024
-0.122 | mod-GAN  | 0.089 ± 0.036 | 0.018 ± 0.027  | 0.011 ± 0.024  | -

Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each variant's column.

Figure 3: Generator objective, F, averaged over 5 training runs on MNIST. Increasing the number of discriminators accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence.

Figure 4: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN* with N = 5 achieves steady-state at 2x the speed of GAN (N = 1). Note Figure 3's filled shadows reveal the stdev of F over runs, while this plot shows the stdev over time.

Figure 5: Comparison of image quality across epochs for N = {1, 2, 5} using GMAN-0 on MNIST.

Figure 6: GMAN* regulates the difficulty of the game by adjusting λ. Initially, G reduces λ to ease learning and then gradually increases λ for a more challenging learning environment.

Score  | Variant | λ*            | λ = 1          | λ = 0
 0.028 | λ*      | -             | -0.008 ± 0.009 | -0.019 ± 0.010
 0.001 | λ = 1   | 0.008 ± 0.009 | -              | -0.008 ± 0.010
-0.025 | λ = 0   | 0.019 ± 0.010 | 0.008 ± 0.010  | -

Figure 7: Pairwise GMAM ± stdev(GMAM) for GMAN-λ with fixed λ's and GMAN* (λ*) over 5 runs on MNIST.

5.2.2 CELEBA & CIFAR-10

We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.
Figure 8: Image quality improvement across the number of discriminators (1, 2, and 3) at the same number of iterations for GMAN-0 on CelebA.

Figure 9 displays images generated by GMAN-0 on CIFAR-10.
See Appendix A.3 for more results.

Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset.

We also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size.

6 CONCLUSION

We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback. In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step; however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important.
ACKNOWLEDGMENTS

We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
BIBLIOGRAPHY

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014.

J. Andrew Bagnell. Robust supervised learning. In Proceedings of the National Conference on Artificial Intelligence, volume 20, pp. 714. AAAI Press / MIT Press, 2005.

Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms for online boosting. arXiv preprint arXiv:1502.02651, 2015.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel.
InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.

Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494, 2015.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.

Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester 2014, 2014.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.

Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, 2009.

Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits, 1998.

Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In International Conference on Machine Learning, pp. 1718–1727, 2015.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), December 2015.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos. Enabling dark energy science with deep generative models of galaxy images. arXiv preprint arXiv:1609.05796, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.

Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3, 2016.

Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.

Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S Paek, and In So Kweon. Pixel-level domain transfer. arXiv preprint arXiv:1603.07442, 2016.

Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528–2535. IEEE, 2010.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
A APPENDIX

A.1 ACCELERATED CONVERGENCE & REDUCED VARIANCE

See Figures 10, 11, 12, and 13.

Figure 10: Generator objective, F, averaged over 5 training runs on CelebA. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 11 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 11: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with N = 5 achieves steady-state at 2x the speed of GAN (N = 1). Note Figure 10's filled shadows reveal the stdev of F over runs, while this plot shows the stdev over time.

Figure 12: Generator objective, F, averaged over 5 training runs on CIFAR-10. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 13: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with N = 5 achieves steady-state at 2x the speed of GAN (N = 1). Note Figure 12's filled shadows reveal the stdev of F over runs, while this plot shows the stdev over time.

A.2 ADDITIONAL GMAM TABLES

See Tables 2, 3, 4, 5, 6. Increasing the number of discriminators from 2 to 5 on CIFAR-10 significantly improves scores over the standard GAN both in terms of the GMAM metric and Inception scores.

A.3 GENERATED IMAGES

See Figures 14 and 15.

Score  | Variant | GMAN-0 | GMAN-1 | GMAN*  | mod-GAN
 0.172 | GMAN-0  | -      | -0.022 | -0.062 | -0.088
 0.050 | GMAN-1  | 0.022  | -      | 0.006  | -0.078
-0.055 | GMAN*   | 0.062  | -0.006 | -      | -0.001
-0.167 | mod-GAN | 0.088  | 0.078  | 0.001  | -

Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column.

Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with two discriminators.

      | GMAN-0        | GMAN-1        | mod-GAN       | GMAN*
Score | 5.878 ± 0.193 | 5.765 ± 0.108 | 5.738 ± 0.176 | 5.539 ± 0.099

Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better.
GMAN variants were trained with two discriminators.

Score  | Variant | GMAN-0 | GMAN*  | GMAN-1 | mod-GAN
 0.180 | GMAN-0  | -      | -0.008 | -0.041 | -0.132
 0.122 | GMAN*   | 0.008  | -      | -0.038 | -0.092
 0.010 | GMAN-1  | 0.041  | 0.038  | -      | -0.089
-0.313 | mod-GAN | 0.132  | 0.092  | 0.089  | -

Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.

      | GMAN-1        | GMAN-0        | GMAN*         | mod-GAN
Score | 6.001 ± 0.194 | 5.957 ± 0.135 | 5.955 ± 0.153 | 5.738 ± 0.176

Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with five discriminators.

Figure 14: Sample of pictures generated on CelebA cropped dataset.

Score  | Variant  | GMAN* | GMAN-1 | GAN    | GMAN-0 | GMAN-max | mod-GAN
 0.184 | GMAN*    | -     | -0.007 | -0.040 | -0.020 | -0.028   | -0.089
 0.067 | GMAN-1   | 0.007 | -      | -0.008 | -0.008 | -0.021   | -0.037
 0.030 | GAN      | 0.040 | 0.008  | -      | 0.002  | -0.018   | -0.058
 0.005 | GMAN-0   | 0.020 | 0.008  | 0.002  | -      | -0.013   | -0.018
-0.091 | GMAN-max | 0.028 | 0.021  | 0.018  | 0.013  | -        | -0.011
-0.213 | mod-GAN  | 0.089 | 0.037  | 0.058  | 0.018  | 0.011    | -

Figure 15: Sample of pictures generated by GMAN-0 on the CIFAR dataset.

A.4 SOMEWHAT RELATED WORK

A GAN framework with two discriminators appeared in Yoo et al. (2016); however, it is applicable only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g., X = {X_1 = Domain 1, X_2 = Domain 2, ...}). In contrast, our framework applies to an unsupervised scenario where an obvious partition of the dataset is unknown. Furthermore, extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to that of their multi-domain discriminator approach. Also, note that assigning a discriminator to each domain is akin to prescribing a new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero (2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and not a discriminator for each of the possibly exponentially many conditional labels. In Section 4.4, we describe an approach to customize adversarial training to better suit the development of the generator. An approach with similar conceptual underpinnings was described in Ravanbakhsh et al. (2016); however, similar to the above, it is only admissible in a semi-supervised scenario whereas ours applies to the unsupervised case.
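The "Scores are obtained by summing each column" rule used throughout the GMAM tables can be sketched in a few lines. The three variants and their pairwise values below are hypothetical, not numbers from the tables:

```python
def variant_scores(variants, pairwise):
    # pairwise[(row, col)] holds the mean GMAM of `col` measured against
    # the row opponent; each variant's score sums its own column
    # (the diagonal is omitted).
    return {col: sum(pairwise[(row, col)] for row in variants if row != col)
            for col in variants}

table = {("A", "B"): -0.02, ("B", "A"): 0.02,
         ("A", "C"): -0.09, ("C", "A"): 0.09,
         ("B", "C"): -0.01, ("C", "B"): 0.01}
scores = variant_scores(["A", "B", "C"], table)
# scores["A"] -> 0.11, the best-performing hypothetical variant
```

Because the pairwise entries are antisymmetric, the scores sum to zero, which is a useful sanity check when reading the tables.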
A.5 SOFTMAX REPRESENTABILITY

Let softmax(V_i) = V̂ ∈ [min_i V_i, max_i V_i]. Also let a = argmin_i V_i, b = argmax_i V_i, and V(t) = V((1 − t)D_a + tD_b) so that V(0) = V_a and V(1) = V_b. The softmax and the minimax objective V(D_i, G) are both continuous in their inputs, so by the intermediate value theorem, we have that ∃ t̂ ∈ [0, 1] s.t. V(t̂) = V̂, which implies ∃ D̂ ∈ D s.t. V(D̂, G) = V̂. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning V(D̂, G) for some D̂ selected by computing another, unknown function over the space of the discriminators. This result holds even if D̂ is not representable by the architecture chosen for D's neural network.

A.6 UNCONSTRAINED OPTIMIZATION

To convert the GMAN* minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable, Λ, define λ(Λ) = log(1 + e^Λ), and let the generator minimize over Λ ∈ R.

A.7 BOOSTING WITH AdaBoost.OL

AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing (P(correct label) = 0.5 + γ ∈ (0, 0.5]), and in fact, allows γ < 0. This is crucial because our weak learners are deep nets with unknown, possibly negative, γ's.

Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost).

A.8 EXPERIMENTAL SETUP

All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for G and strided convolutions for D, except for the input of G and the last layer of D. We use the single-step gradient method as in Nowozin et al. (2016), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from (0.3, 0.7]. Variations in the discriminators were effected in two ways. We varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4 and so on), as well as varying dropout rates. Secondly, we also decorrelated the samples that the discriminators were training on by splitting the minibatch across the discriminators. The code was written in TensorFlow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:

• Generator latent variables z ∼ U(−1, 1)^100
• Generator convolution transpose layers: (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1)
• Base discriminator architecture: (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128).
• Variants have either convolution 3 (4, 4, 128) removed or all the filter sizes divided by 2 or 4. That is, (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32).
• ReLU activations for all the hidden units. Tanh activation at the output units of the generator. Sigmoid at the output of the discriminator.
• Training was performed with Adam (Kingma & Ba (2014)) (lr = 2 × 10^−4, β_1 = 0.5).
• MNIST was trained for 20 epochs with a minibatch of size 100.
• CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100.
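Two of the diversity mechanisms described in A.8, minibatch splitting and varied dropout rates, can be sketched as follows. The even spacing of the dropout rates is an illustrative assumption; the text only states that the rates lie in (0.3, 0.7]:

```python
def split_minibatch(batch, num_discriminators):
    # Decorrelate discriminators by training each on a disjoint shard of
    # the minibatch; total work stays linear in the batch size.
    return [batch[i::num_discriminators] for i in range(num_discriminators)]

def dropout_rates(num_discriminators, low=0.3, high=0.7):
    # One rate per discriminator, spread over (low, high]
    # (even spacing is an assumption for illustration).
    step = (high - low) / num_discriminators
    return [low + step * (i + 1) for i in range(num_discriminators)]

# split_minibatch(list(range(10)), 3) -> shards of sizes 4, 3, 3
```

Each shard is then fed to a different discriminator within a single training step, so no sample is scored by more than one discriminator per minibatch.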
LEARNING RECURRENT SPAN REPRESENTATIONS FOR EXTRACTIVE QUESTION ANSWERING

Kenton Lee†, Shimi Salant*, Tom Kwiatkowski‡, Ankur Parikh‡, Dipanjan Das‡, and Jonathan Berant*
[email protected], [email protected], {tomkwiat, aparikh, dipanjand}@google.com, [email protected]
†University of Washington, Seattle, USA
*Tel-Aviv University, Tel-Aviv, Israel
‡Google Research, New York, USA

ABSTRACT
The reading comprehension task, that asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQUAD dataset in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over other approaches that factor the prediction into separate predictions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s baseline by > 50%.

1 INTRODUCTION
A primary goal of natural language processing is to develop systems that can answer questions about the contents of documents. The reading comprehension task is of practical interest (we want computers to be able to read the world's text and then answer our questions) and, since we believe it requires deep language understanding, it has also become a flagship task in NLP research. A number of reading comprehension datasets have been developed that focus on answer selection from a small set of alternatives defined by annotators (Richardson et al., 2013) or existing NLP pipelines that cannot be trained end-to-end (Hill et al., 2016; Hermann et al., 2015). Subsequently, the models proposed for this task have tended to make use of the limited set of candidates, basing their predictions on mention-level attention weights (Hermann et al., 2015), or centering classifiers (Chen et al., 2016), or network memories (Hill et al., 2016) on candidate locations. Recently, Rajpurkar et al. (2016) released the less restricted SQUAD dataset¹ that does not place any constraints on the set of allowed answers, other than that they should be drawn from the evidence document. Rajpurkar et al. proposed a baseline system that chooses answers from the constituents identified by an existing syntactic parser. This allows them to prune the O(N²) answer candidates in each document of length N, but it also effectively renders 20.7% of all questions unanswerable. Subsequent work by Wang & Jiang (2016) significantly improves upon this baseline by using an end-to-end neural network architecture to identify answer spans by labeling either individual words, or the start and end of the answer span. Both of these methods do not make independence assumptions about substructures, but they are susceptible to search errors due to greedy training and decoding.
¹http://stanford-qa.com

In contrast, here we argue that it is beneficial to simplify the decoding procedure by enumerating all possible answer spans. By explicitly representing each answer span, our model can be globally normalized during training and decoded exactly during evaluation. A naive approach to building the O(N²) spans of up to length N would require a network that is cubic in size with respect to the passage length, and such a network would be untrainable. To overcome this, we present a novel neural architecture called RASOR that builds fixed-length span representations, reusing recurrent computations for shared substructures. We demonstrate that directly classifying each of the competing spans, and training with global normalization over all possible spans, leads to a significant increase in performance. In our experiments, we show an increase in performance over Wang & Jiang (2016) of 5% in terms of exact match to a reference answer, and 3.6% in terms of predicted answer F1 with respect to the reference. On both of these metrics, we close the gap between Rajpurkar et al.'s baseline and the human-performance upper-bound by > 50%.

2 EXTRACTIVE QUESTION ANSWERING

2.1 TASK DEFINITION

Extractive question answering systems take as input a question q = {q_0, ..., q_n} and a passage of text p = {p_0, ..., p_m} from which they predict a single answer span a = (a_start, a_end), represented as a pair of indices into p. Machine-learned extractive question answering systems, such as the one presented here, learn a predictor function f(q, p) → a from a training dataset of (q, p, a) triples.
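The contract in 2.1 is small enough to pin down in a few lines of Python. The tokens and indices below are a made-up example; spans are inclusive index pairs into p:

```python
def answer_text(passage, span):
    # a = (a_start, a_end): inclusive token indices into the passage p.
    start, end = span
    return passage[start:end + 1]

passage = ["the", "cat", "sat", "on", "the", "mat"]
gold = (4, 5)                       # the span covering "the mat"
predicted = (4, 5)
exact_match = answer_text(passage, predicted) == answer_text(passage, gold)
# exact_match -> True
```

A learned predictor f(q, p) would produce `predicted`; comparing the extracted strings against the reference is the basis of the exact-match metric reported later.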
2.2 RELATED WORK

For the SQUAD dataset, the original paper from Rajpurkar et al. (2016) implemented a linear model with sparse features based on n-grams and part-of-speech tags present in the question and the candidate answer. Other than lexical features, they also used syntactic information in the form of dependency paths to extract more general features. They set a strong baseline for following work and also presented an in-depth analysis, showing that lexical and syntactic features contribute most strongly to their model's performance. Subsequent work by Wang & Jiang (2016) uses an end-to-end neural network method that uses a Match-LSTM to model the question and the passage, and uses pointer networks (Vinyals et al., 2015) to extract the answer span from the passage. This model resorts to greedy decoding and falls short in terms of performance compared to our model (see Section 5 for more detail). While we only compare to published baselines, there are other unpublished competitive systems on the SQUAD leaderboard, as listed in footnote 4.
A task that is closely related to extractive question answering is the Cloze task (Taylor, 1953), in which the goal is to predict a concealed span from a declarative sentence given a passage of supporting text. Recently, Hermann et al. (2015) presented a Cloze dataset in which the task is to predict the correct entity in an incomplete sentence given an abstractive summary of a news article. Hermann et al. also present various neural architectures to solve the problem. Although this dataset is large and varied in domain, recent analysis by Chen et al. (2016) shows that simple models can achieve close to the human upper bound. As noted by the authors of the SQUAD paper, the annotated answers in the SQUAD dataset are often spans that include non-entities and can be longer phrases, unlike the Cloze datasets, thus making the task more challenging.

Another, more traditional line of work has focused on extractive question answering on sentences, where the task is to extract a sentence from a document, given a question. Relevant datasets include datasets from the annual TREC evaluations (Voorhees & Tice, 2000) and WikiQA (Yang et al., 2015), where the latter dataset specifically focused on Wikipedia passages. There has been a line of interesting recent publications using neural architectures, focused on this variety of extractive question answering (Tymoshenko et al., 2016; Wang et al., 2016, inter alia). These methods model the question and a candidate answer sentence, but do not focus on possible candidate answer spans that may contain the answer to the given question. In this work, we focus on the more challenging problem of extracting the precise answer span.
# 3 MODEL

We propose a model architecture called RASOR², illustrated in Figure 1, that explicitly computes embedding representations for candidate answer spans. In most structured prediction problems (e.g. sequence labeling or parsing), the number of possible output structures is exponential in the input length, and computing representations for every candidate is prohibitively expensive. However, we exploit the simplicity of our task, where we can trivially and tractably enumerate all candidates. This facilitates an expressive model that computes joint representations of every answer span and that can be globally normalized during learning.

In order to compute these span representations, we must aggregate information from the passage and the question for every answer candidate. For the example in Figure 1, RASOR computes an embedding for the candidate answer spans: fixed to, fixed to the, to the, etc. A naive approach for these aggregations would require a network that is cubic in size with respect to the passage length. Instead, our model reduces this to a quadratic size by reusing recurrent computations for shared substructures (i.e. common passage words) from different spans.

Since the choice of answer span depends on the original question, we must incorporate this information into the computation of the span representation. We model this by augmenting the passage word embeddings with additional embedding representations of the question. In this section, we motivate and describe the architecture for RASOR in a top-down manner.

3.1 SCORING ANSWER SPANS

The goal of our extractive question answering system is to predict the single best answer span among all candidates from the passage p, denoted as A(p).
Therefore, we define a probability distribution over all possible answer spans given the question q and passage p, and the predictor function finds the answer span with the maximum likelihood:

f(q, p) := argmax_{a ∈ A(p)} P(a | q, p)    (1)

One might be tempted to introduce independence assumptions that would enable cheaper decoding. For example, this distribution can be modeled as (1) a product of conditionally independent distributions (binary) for every word or (2) a product of conditionally independent distributions (over words) for the start and end indices of the answer span. However, we show in Section 5.2 that such independence assumptions hurt the accuracy of the model, and instead we only assume a fixed-length representation h_a of each candidate span that is scored and normalized with a softmax layer (Span score and Softmax in Figure 1):

s_a = w_a · FFNN(h_a),    a ∈ A(p)    (2)

P(a | q, p) = exp(s_a) / Σ_{a′ ∈ A(p)} exp(s_{a′})    (3)

where FFNN(·) denotes a fully connected feed-forward neural network that provides a non-linear mapping of its input embedding.

²An abbreviation for Recurrent Span Representations, pronounced as razor.

3.2 RASOR: RECURRENT SPAN REPRESENTATION

The previously defined probability distribution depends on the answer span representations, h_a. When computing h_a, we assume access to representations of individual passage words that have been augmented with a representation of the question. We denote these question-focused passage word embeddings as {p′_1, . . . , p′_m} and describe their creation in Section 3.3. In order to reuse computation for shared substructures, we use a bidirectional LSTM (Hochreiter & Schmidhuber, 1997). This allows us to simply concatenate the bidirectional LSTM (BiLSTM) outputs at the endpoints of a span to jointly encode its inside and outside information (Span embedding in Figure 1):

{p*_1, . . . , p*_m} = BILSTM({p′_1, . . . , p′_m})    (4)

h_(a_start, a_end) = [p*_{a_start}, p*_{a_end}],    (a_start, a_end) ∈ A(p)    (5)

where BILSTM(·) denotes a BiLSTM over its input embedding sequence and p*_i is the concatenation of forward and backward outputs at time-step i. While the visualization in Figure 1 shows a single-layer BiLSTM for simplicity, we use a multi-layer BiLSTM in our experiments. The concatenated output of each layer is used as input for the subsequent layer, allowing the upper layers to depend on the entire passage.

3.3 QUESTION-FOCUSED PASSAGE WORD EMBEDDING

Computing the question-focused passage word embeddings {p′_1, . . . , p′_m} requires integrating question information into the passage. The architecture for this integration is flexible and likely depends on the nature of the dataset. For the SQUAD dataset, we find that both passage-aligned and passage-independent question representations are effective at incorporating this contextual information, and experiments will show that their benefits are complementary. To incorporate these question representations, we simply concatenate them with the passage word embeddings (Question-focused passage word embedding in Figure 1).

We use fixed pretrained embeddings to represent question and passage words. Therefore, in the following discussion, notation for the words are interchangeable with their embedding representations.

Question-independent passage word embedding   The first component simply looks up the pretrained word embedding for the passage word, p_i.

Passage-aligned question representation   In this dataset, the question-passage pairs often contain large lexical overlap or similarity near the correct answer span. To encourage the model to exploit these similarities, we include a fixed-length representation of the question based on soft-alignments with the passage word. The alignments are computed via neural attention (Bahdanau et al., 2014), and we use the variant proposed by Parikh et al.
(2016), where attention scores are dot products between non-linear mappings of word embeddings.
1 ⠤ j ⠤ n (6) # sij = FFNN(pi) · FFNN(qj) exp(sij) k=1 exp(sik) exp(sij) . ay = Sa l<j<n (7) 0 ST exp(san) n ails _ aij (8) j=l Passage-independent question representation We also include a representation of the question that does not depend on the passage and is shared for all passage words. Similar to the previous question representation, an attention score is computed via a dot-product, except the question word is compared to a universal learned embedding rather any particular passage word. Additionally, we incorporate contextual information with a BiLSTM before aggregating the outputs using this attention mechanism. The goal is to generate a coarse-grained summary of the question that depends on word order. For- mally, the passage-independent question representation qindep is computed as follows: = BILSTM(q) 8; = Wa FFNN(q;) exp(s;) gq = 7 Vihar exp(se) {91,---+4n} = BILSTM(q) (9) 1 ⠤ j ⠤ n (10) exp(s;) . gq = l<j<n (11) 7 Vihar exp(se) n gine? = SP aja (12) j=l j=1
This representation is a bidirectional generalization of the question representation recently proposed by Li et al. (2016) for a different question-answering task.

Given the above three components, the complete question-focused passage word embedding for p_i is their concatenation: p′_i = [p_i, q_i^align, q^indep].

[Figure 1 diagram omitted; its components are labeled Span score, Softmax, Hidden layer, Span embedding, Passage-level BiLSTM, Question-focused passage word embedding, Passage-independent question representation, Question-level BiLSTM, and Passage-aligned question representation.]

Figure 1: A visualization of RASOR, where the question is "What are the stators attached to?" and the passage is ". . . fixed to the turbine . . .".
The model constructs question-focused passage word embeddings by concatenating (1) the original passage word embedding, (2) a passage-aligned representation of the question, and (3) a passage-independent representation of the question shared across all passage words. We use a BiLSTM over these concatenated embeddings to efficiently recover embedding representations of all possible spans, which are then scored by the final layer of the model.

3.4 LEARNING

Given the above model specification, learning is straightforward. We simply maximize the log-likelihood of the correct answer candidates and backpropagate the errors end-to-end.

# 4 EXPERIMENTAL SETUP

We represent each of the words in the question and document using 300-dimensional GloVe embeddings trained on a corpus of 840bn words (Pennington et al., 2014). These embeddings cover 200k words, and all out-of-vocabulary (OOV) words are projected onto one of 1m randomly initialized 300d embeddings. We couple the input and forget gates in our LSTMs, as described in Greff et al. (2016), and we use a single dropout mask to apply dropout across all LSTM time-steps, as proposed by Gal & Ghahramani (2016). Hidden layers in the feed-forward neural networks use rectified linear units (Nair & Hinton, 2010).
Answer candidates are limited to spans with at most 30 words. To choose the final model configuration, we ran grid searches over: the dimensionality of the LSTM hidden states; the width and depth of the feed-forward neural networks; dropout for the LSTMs; the number of stacked LSTM layers (1, 2, 3); and the decay multiplier [0.9, 0.95, 1.0] with which we multiply the learning rate every 10k steps. The best model uses 50d LSTM states; two-layer BiLSTMs for the span encoder and the passage-independent question representation; dropout of 0.1 throughout; and a learning rate decay of 5% every 10k steps.
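The candidate set A(p) used throughout Section 3 — capped here at 30-word spans — and the softmax span selection of Equations (1)–(3) can be sketched in plain Python. This is an illustrative toy, not the paper's TensorFlow model: the hand-written score list stands in for the learned values w_a · FFNN(h_a).

```python
import math

def enumerate_spans(num_words, max_len=30):
    """All candidate spans A(p): (start, end) word-index pairs, end inclusive."""
    return [(s, e) for s in range(num_words)
                   for e in range(s, min(s + max_len, num_words))]

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scores]
    z = sum(exps)
    return [x / z for x in exps]

def predict_span(span_scores, spans):
    """Eqs. (1)-(3): normalize span scores with a softmax over all of A(p),
    then return the argmax span and its probability."""
    probs = softmax(span_scores)
    best = max(range(len(spans)), key=lambda i: probs[i])
    return spans[best], probs[best]

# 4-word toy passage with spans capped at length 2 -> 7 candidates
spans = enumerate_spans(4, max_len=2)
# stand-ins for the learned span scores
scores = [0.1, 2.0, 0.3, 0.2, -1.0, 0.5, 0.0]
best_span, best_prob = predict_span(scores, spans)
```

The quadratic number of candidates is what makes exhaustive normalization tractable here, in contrast to tasks whose output space is exponential in the input length.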
All models are implemented using TensorFlow³ and trained on the SQUAD training set using the ADAM (Kingma & Ba, 2015) optimizer with a mini-batch size of 4 and trained using 10 asynchronous training threads on a single machine.

# 5 RESULTS

We train on the 80k (question, passage, answer span) triples in the SQUAD training set and report results on the 10k examples in the SQUAD development and test sets. All results are calculated using the official SQUAD evaluation script, which reports exact answer match and F1 overlap of the unigrams between the predicted answer and the closest labeled answer from the 3 reference answers given in the SQUAD development set.

# 5.1 COMPARISONS TO OTHER WORK

Our model with recurrent span representations (RASOR) is compared to all previously published systems⁴. Rajpurkar et al. (2016) published a logistic regression baseline as well as human performance on the SQUAD task. The logistic regression baseline uses the output of an existing syntactic parser both as a constraint on the set of allowed answer spans and as a method of creating sparse features for an answer-centric scoring model. Despite not having access to any external representation of linguistic structure, RASOR achieves an error reduction of more than 50% over this baseline, both in terms of exact match and F1, relative to the human performance upper bound.

| System | Dev EM | Dev F1 | Test EM | Test F1 |
|---|---|---|---|---|
| Logistic regression baseline | 39.8 | 51.0 | 40.4 | 51.0 |
| Match-LSTM (Sequence) | 54.5 | 67.7 | 54.8 | 68.0 |
| Match-LSTM (Boundary) | 60.5 | 70.7 | 59.4 | 70.0 |
| RASOR | 66.4 | 74.9 | 67.4 | 75.5 |
| Human | 81.4 | 91.0 | 82.3 | 91.2 |

Table 1: Exact match (EM) and span F1 on SQUAD.

More closely related to RASOR is the boundary model with Match-LSTMs and Pointer Networks by Wang & Jiang (2016).
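The exact match (EM) and F1 numbers above can be sketched as follows. This is an illustrative approximation of the official SQUAD evaluation script, which additionally strips articles and punctuation before comparing; per question, each metric is taken as the maximum over the three reference answers.

```python
from collections import Counter

def exact_match(pred, gold):
    """1.0 iff the normalized answer strings are identical."""
    return float(pred.strip().lower() == gold.strip().lower())

def unigram_f1(pred, gold):
    """Harmonic mean of unigram precision and recall between two answers."""
    p, g = pred.lower().split(), gold.lower().split()
    common = Counter(p) & Counter(g)   # multiset intersection of tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

For example, the prediction "fixed to the turbine" against the reference "the turbine" gets EM 0 but F1 2/3, which is why F1 is the more forgiving of the two metrics.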
Their model similarly uses recurrent networks to learn embeddings of each passage word in the context of the question, and it can also capture interactions between endpoints, since the end index probability distribution is conditioned on the start index. However, both training and evaluation are greedy, making their system susceptible to search errors when decoding. In contrast, RASOR can efficiently and explicitly model the quadratic number of possible answers, which leads to a 14% error reduction over the best performing Match-LSTM model.

³ www.tensorflow.org
⁴ As of submission, other unpublished systems are shown on the SQUAD leaderboard, including Match-LSTM with Ans-Ptr (Boundary+Ensemble), Co-attention, r-net, Match-LSTM with Bi-Ans-Ptr (Boundary), Co-attention old, Dynamic Chunk Reader, Dynamic Chunk Ranker with Convolution layer, Attentive Chunker.

5.2 MODEL VARIATIONS

We investigate two main questions in the following ablations and comparisons. (1) How important are the two methods of representing the question described in Section 3.3? (2) What is the impact of learning a loss function that accurately reflects the span prediction task?

Question representations   Table 2a shows the performance of RASOR when either of the two question representations described in Section 3.3 is removed. The passage-aligned question representation is crucial, since lexically similar regions of the passage provide strong signal for relevant answer spans. If the question is only integrated through the inclusion of a passage-independent representation, performance drops drastically. The passage-independent question representation over
the BiLSTM is less important, but it still accounts for over 3% exact match and F1. The input of both of these components is analyzed qualitatively in Section 6.

| Question representation | EM | F1 |
|---|---|---|
| Only passage-independent | 48.7 | 56.6 |
| Only passage-aligned | 63.1 | 71.3 |
| RASOR | 66.4 | 74.9 |

(a) Ablation of question representations.

| Learning objective | EM | F1 |
|---|---|---|
| Membership prediction | 57.9 | 69.7 |
| BIO sequence prediction | 63.9 | 73.0 |
| Endpoints prediction | 65.3 | 75.1 |
| Span prediction w/ log loss | 65.2 | 73.6 |

(b) Comparisons for different learning objectives given the same passage-level BiLSTM.

Table 2: Results for variations of the model architecture presented in Section 3.

Learning objectives   Given a fixed architecture that is capable of encoding the input question-passage pairs, there are many ways of setting up a learning objective to encourage the model to predict the correct span. In Table 2b, we provide comparisons of some alternatives (learned end-to-end) given only the passage-level BiLSTM from RASOR. In order to provide clean comparisons, we restrict the alternatives to objectives that are trained and evaluated with exact decoding.

The simplest alternative is to consider this task as binary classification for every word (Membership prediction in Table 2b). In this baseline, we optimize the logistic loss for binary labels indicating whether passage words belong to the correct answer span. At prediction time, a valid span can be recovered in linear time by finding the maximum contiguous sum of scores.

Li et al. (2016) proposed a sequence-labeling scheme that is similar to the above baseline (BIO sequence prediction in Table 2b). We follow their proposed model and learn a conditional random field (CRF) layer after the passage-level BiLSTM to model transitions between the different labels. At prediction time, a valid span can be recovered in linear time using Viterbi decoding, with hard transition constraints to enforce a single contiguous output.

We also consider a model that independently predicts the two endpoints of the answer span (Endpoints prediction in Table 2b). This model uses the softmax loss over passage words during learning.
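For the membership-prediction baseline above, recovering the maximum contiguous sum of per-word scores in linear time can be done with Kadane's algorithm, tracked with indices. This is one standard choice; the paper does not name its exact decoder.

```python
def best_contiguous_span(word_scores):
    """Kadane's algorithm with index tracking: return the highest-scoring
    contiguous span of per-word membership scores, plus its total score,
    in a single linear pass."""
    best_sum, best_span = float("-inf"), None
    cur_sum, cur_start = 0.0, 0
    for i, score in enumerate(word_scores):
        if cur_sum <= 0:            # restarting here beats extending
            cur_sum, cur_start = score, i
        else:
            cur_sum += score
        if cur_sum > best_sum:
            best_sum, best_span = cur_sum, (cur_start, i)
    return best_span, best_sum

span, total = best_contiguous_span([-1.0, 2.0, 3.0, -5.0, 1.0])
```

On the toy scores above the decoder selects the two positive middle words as the answer span.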
When decoding, we only need to enforce the constraint that the start index is no greater than the end index. Without the interactions between the endpoints, this can be computed in linear time. Note that this model has the same expressivity as RASOR if the span-level FFNN were removed.

Lastly, we compare with a model using the same architecture as RASOR but trained with a binary logistic loss rather than a softmax loss over spans (Span prediction w/ logistic loss in Table 2b).

The trend in Table 2b shows that the model is better at leveraging the supervision as the learning objective more accurately reflects the fundamental task at hand: determining the best answer span.

First, we observe general improvements when using labels that closely align with the task. For example, the labels for membership prediction simply happen to provide single contiguous spans in the supervision. The model must consider far more possible answers than it needs to (the power set of all words). The same problem holds for BIO sequence prediction: the model must do additional work to learn the semantics of the BIO tags. On the other hand, in RASOR, the semantics of an answer span is naturally encoded by the set of labels.

Second, we observe the importance of allowing interactions between the endpoints using the span-level FFNN. RASOR outperforms the endpoint prediction model by 1.1 in exact match. The interaction between endpoints enables RASOR to enforce consistency across its two substructures. While this does not provide improvements for predicting the correct region of the answer (captured by the F1 metric, which drops by 0.2), it is more likely to predict a clean answer span that matches human judgment exactly (captured by the exact-match metric).
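The linear-time decoding just described — independent start and end scores, constrained so that the start index is no greater than the end index — can be sketched as a single pass that pairs each end position with the best start seen so far. The score lists below are illustrative stand-ins for the model's per-word log-probabilities.

```python
def decode_endpoints(start_scores, end_scores):
    """Maximize start_scores[s] + end_scores[e] subject to s <= e
    in O(n), by tracking the best start prefix while scanning ends."""
    best_val, best_pair = float("-inf"), None
    best_start_val, best_start_idx = float("-inf"), None
    for e, end_score in enumerate(end_scores):
        # the start candidates for end position e are indices 0..e
        if start_scores[e] > best_start_val:
            best_start_val, best_start_idx = start_scores[e], e
        if best_start_val + end_score > best_val:
            best_val = best_start_val + end_score
            best_pair = (best_start_idx, e)
    return best_pair, best_val
```

Note how the constraint matters: with start scores [0, 10] and end scores [5, 1], the unconstrained argmax pair (start 1, end 0) is invalid, and the decoder correctly returns (1, 1) instead.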
# 6 ANALYSIS

Figure 2 shows how the performances of RASOR and the endpoint predictor introduced in Section 5.2 degrade as the lengths of their predictions increase. It is clear that explicitly modeling interactions between end markers is increasingly important as the span grows in length.

[Figure 2 (accuracy vs. prediction length) and Figure 3 (attention masks over the questions "Which people brought forward one of the earliest examples of civil disobedience?" and "What does civil disobedience protest?") omitted.]

Figure 2:
F1 and Exact Match (EM) accuracy of RASOR and the endpoint predictor baseline over different prediction lengths.

Figure 3: Attention masks from RASOR. Top predictions for the first example are "Egyptians", "Egyptians against the British", "British". Top predictions for the second are "unjust laws", "what they deem to be unjust laws", "laws".

Figure 3 shows attention masks for both of RASOR's question representations. The passage-independent question representation pays most attention to the words that could attach to the answer in the passage ("
brought", "against") or describe the answer category ("people"). Meanwhile, the passage-aligned question representation pays attention to similar words. The top predictions for both examples are all valid syntactic constituents, and they all have the correct semantic category. However, RASOR assigns almost as much probability mass to its incorrect third prediction "British" as it does to the top-scoring correct prediction "Egyptians". This showcases a common failure case for RASOR, where it can find an answer of the correct type close to a phrase that overlaps with the question, but it cannot accurately represent the semantic dependency on that phrase.

# 7 CONCLUSION

We have shown a novel approach for performing extractive question answering on the SQUAD dataset by explicitly representing and scoring answer span candidates. The core of our model relies on a recurrent network that enables shared computation for the shared substructure across span candidates. We explore different methods of encoding the passage and question, showing the benefits of including both passage-independent and passage-aligned question representations. While we show that this encoding method is beneficial for the task, it is orthogonal to the core contribution of efficiently computing span representations. In future work, we plan to explore alternate architectures that provide input to the recurrent span representations.

# REFERENCES
Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of ACL, 2016.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Proceedings of NIPS, 2016.
Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, PP:1–11, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom.
Teaching machines to read and comprehend. In Proceedings of NIPS, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of ICLR, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Diederik Kingma and Jimmy Ba. Adam:
A method for stochastic optimization. In Proceedings of ICLR, 2015.

Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. CoRR, abs/1607.06275, 2016.

Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of ICML, 2010.

Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of EMNLP, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning.
GloVe: Global vectors for word representation. In Proceedings of EMNLP, 2014.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, 2016.

Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, 2013.
Wilson Taylor. Cloze procedure: A new tool for measuring readability. Journalism Quarterly, 30:415–433, 1953.

Kateryna Tymoshenko, Daniele Bonadiman, and Alessandro Moschitti. Convolutional neural networks vs. convolution kernels: Feature engineering for answer sentence reranking. In Proceedings of NAACL, 2016.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proceedings of NIPS, 2015.

Ellen M. Voorhees and Dawn M.
Tice. Building a question answering test collection. In Proceedings of SIGIR, 2000.

Bingning Wang, Kang Liu, and Jun Zhao. Inner attention based recurrent neural networks for answer selection. In Proceedings of ACL, 2016.

Shuohang Wang and Jing Jiang. Machine comprehension using Match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016.

Yi Yang, Wen-tau Yih, and Christopher Meek.
WikiQA: A challenge dataset for open-domain question answering. In Proceedings of EMNLP, 2015.
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
Tal Linzen and Emmanuel Dupoux
LSCP & IJN, CNRS, EHESS and ENS, PSL Research University
{tal.linzen, emmanuel.dupoux}@ens.fr

Yoav Goldberg
Computer Science Department, Bar Ilan University
[email protected]

# Abstract
The success of long short-term memory (LSTM) neural networks in language processing is typically attributed to their ability to capture long-distance statistical regularities. Linguistic regularities are often sensitive to syntactic structure; can such dependencies be captured by LSTMs, which do not have explicit structural representations? We begin addressing this question using number agreement in English subject-verb dependencies. We probe the architecture's grammatical competence both using training objectives with an explicit grammatical target (number prediction, grammaticality judgments) and using language models. In the strongly supervised settings, the LSTM achieved very high overall accuracy (less than 1% errors), but errors increased when sequential and structural information conflicted. The frequency of such errors rose sharply in the language-modeling setting. We conclude that LSTMs can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured.

# Introduction

Recurrent neural networks (RNNs) are highly effective models of sequential data (Elman, 1990). The rapid adoption of RNNs in NLP systems in recent years, in particular of RNNs with gating mechanisms such as long short-term memory (LSTM) units (Hochreiter and Schmidhuber, 1997) or gated recurrent units (GRU) (Cho et al., 2014), has led to significant gains in language modeling (Mikolov et al., 2010; Sundermeyer et al., 2012), parsing (Vinyals et al., 2015; Kiperwasser and Goldberg, 2016; Dyer et al., 2016), machine translation (Bahdanau et al., 2015) and other tasks.

The effectiveness of RNNs¹ is attributed to their ability to capture statistical contingencies that may span an arbitrary number of words. The word France, for example, is more likely to occur somewhere in a sentence that begins with Paris than in a sentence that begins with Penguins. The fact that an arbitrary number of words can intervene between the mutually predictive words implies that they cannot be captured by models with a fixed window such as n-gram models, but can in principle be captured by RNNs, which do not have an architecturally fixed limit on dependency length.

RNNs are sequence models: they do not explicitly incorporate syntactic structure. Indeed, many word co-occurrence statistics can be captured by treating the sentence as an unstructured list of words (Paris-France); it is therefore unsurprising that RNNs can learn them well. Other dependencies, however, are sensitive to the syntactic structure of the sentence (Chomsky, 1965; Everaert et al., 2015). To what extent can RNNs learn to model such phenomena based only on sequential cues?

Previous research has shown that RNNs (in particular LSTMs) can learn artificial context-free languages (Gers and Schmidhuber, 2001) as well as nesting and indentation in a programming language (Karpathy et al., 2016). The goal of the present work is to probe their ability to learn natural language hierarchical (syntactic) structures from a corpus without syntactic annotations.

¹ In this work we use the term RNN to refer to the entire class of sequential recurrent neural networks. Instances of the class include long short-term memory networks (LSTM) and the Simple Recurrent Network (SRN) due to Elman (1990).
As a first step, we focus on a particular dependency that is commonly regarded as evidence for hierarchical structure in human language: English subject-verb agreement, the phenomenon in which the form of a verb depends on whether the subject is singular or plural (the kids play but the kid plays; see additional details in Section 2). If an RNN-based model succeeded in learning this dependency, that would indicate that it can learn to approximate or even faithfully implement syntactic structure.

Our main interest is in whether LSTMs have the capacity to learn structural dependencies from a natural corpus. We therefore begin by addressing this question under the most favorable conditions: training with explicit supervision. In the setting with the strongest supervision, which we refer to as the number prediction task, we train the model directly on the task of guessing the number of a verb based on the words that preceded it (Sections 3 and 4). We further experiment with a grammaticality judgment training objective, in which we provide the model with full sentences annotated as to whether or not they violate subject-verb number agreement, without an indication of the locus of the violation (Section 5). Finally, we trained the model without any grammatical supervision, using a language modeling objective (predicting the next word).

Our quantitative results (Section 4) and qualitative analysis (Section 7) indicate that most naturally occurring agreement cases in the Wikipedia corpus are easy: they can be resolved without syntactic information, based only on the sequence of nouns preceding the verb. This leads to high overall accuracy in all models. Most of our experiments focus on the supervised number prediction model. The accuracy of this model was lower on harder cases, which require the model to encode or approximate structural information; nevertheless, it succeeded in recovering the majority of agreement cases even when four nouns of the opposite number intervened between the subject and the verb (17% errors). Baseline models failed spectacularly on these hard cases, performing far below chance levels. Fine-grained analysis revealed that mistakes are much more common when no overt cues to syntactic structure (in particular function words) are available, as is the case in noun-noun compounds and reduced relative clauses. This indicates that the number prediction model indeed managed to capture a decent amount of syntactic knowledge, but was overly reliant on function words.
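The number prediction task described above can be made concrete with a sketch of a single training instance. The token list, field names, and example sentence below are illustrative assumptions, not the paper's released data format.

```python
def number_prediction_instance(tokens, verb_index, verb_number):
    """Build one supervised example: the model sees only the words
    preceding the verb and must predict whether that verb is singular
    or plural."""
    assert 0 < verb_index < len(tokens)
    assert verb_number in ("singular", "plural")
    return {"input": tokens[:verb_index], "label": verb_number}

# agreement across an intervening singular noun ("cabinet"):
# the model must track the plural subject "keys", not the nearer noun
example = number_prediction_instance(
    ["the", "keys", "to", "the", "cabinet", "are", "on", "the", "table"],
    verb_index=5,
    verb_number="plural",
)
```

Cases like this one, where a noun of the opposite number intervenes between the subject and the verb, are exactly the "hard cases" analyzed in the results below.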
The accuracy of this model was lower on harder cases, which require the model to encode or approximate structural information; nevertheless, it succeeded in recovering the majority of agreement cases even when four nouns of the opposite number intervened between the subject and the verb (17% errors). Baseline models failed spectacularly on these hard cases, performing far below chance levels. Fine-grained analysis revealed that mistakes are much more common when no overt cues to syntactic structure (in particular function words) are available, as is the case in noun-noun compounds and reduced relative clauses. This indicates that the number prediction model indeed managed to capture a decent amount of syntactic knowledge, but was overly reliant on function words.
Error rates increased only mildly when we switched to more indirect supervision consisting only of sentence-level grammaticality annotations without an indication of the crucial verb. By contrast, the language model trained without explicit grammatical supervision performed worse than chance on the harder agreement prediction cases. Even a state-of-the-art large-scale language model (Jozefowicz et al., 2016) was highly sensitive to recent but structurally irrelevant nouns, making more than five times as many mistakes as the number prediction model on these harder cases. These results suggest that explicit supervision is necessary for learning the agreement dependency using this architecture, limiting its plausibility as a model of child language acquisition (Elman, 1990). From a more applied perspective, this result suggests that for tasks in which it is desirable to capture syntactic dependencies (e.g., machine translation or language generation), language modeling objectives should be supplemented by supervision signals that directly capture the desired behavior.
# 2 Background: Subject-Verb Agreement as Evidence for Syntactic Structure

The form of an English third-person present tense verb depends on whether the head of the syntactic subject is plural or singular:2

(1) a. The key is on the table.
    b. *The key are on the table.
    c. *The keys is on the table.
    d. The keys are on the table.

While in these examples the subject's head is adjacent to the verb, in general the two can be separated by some sentential material:3

(2) The keys to the cabinet are on the table.

Given a syntactic parse of the sentence and a verb, it is straightforward to identify the head of the subject that corresponds to that verb, and use that information to determine the number of the verb (Figure 1).

Figure 1: [Dependency parse of "The keys to the cabinet are on the table", with edges root, nsubj, det, prep and pobj.] The form of the verb is determined by the head of the subject, which is directly connected to it via an nsubj edge. Other nouns that intervene between the head of the subject and the verb (here cabinet is such a noun) are irrelevant for determining the form of the verb and need to be ignored.

By contrast, models that are insensitive to structure may run into substantial difficulties capturing this dependency. One potential issue is that there is no limit to the complexity of the subject NP, and any number of sentence-level modifiers and parentheticals (and therefore an arbitrary number of words) can appear between the subject and the verb:

(3) The building on the far right that's quite old and run down is the Kilgore Bank Building.

2 Identifying the head of the subject is typically straightforward. In what follows we will use the shorthand "the subject" to refer to the head of the subject.
3 In the examples, the subject and the corresponding verb are marked in boldface, agreement attractors are underlined and intervening nouns of the same number as the subject are marked in italics. Asterisks mark unacceptable sentences.
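The parse-based procedure just described, reading the verb's number off its nsubj dependent, can be sketched in code. This is our own minimal illustration, not part of the original study; the toy number lexicon, the `verb_number` helper, and the head/label encoding are assumptions made for the example.

```python
# Minimal sketch (not the study's code): given a dependency parse,
# determine the required verb number from the head of the subject.
# The toy lexicon below is a stand-in for real number information.

PLURAL_NOUNS = {"keys", "cabinets", "tables"}
SINGULAR_NOUNS = {"key", "cabinet", "table"}

def verb_number(tokens, heads, labels, verb_idx):
    """tokens[i]: word i; heads[i]: index of word i's head (-1 for root);
    labels[i]: label of the edge from heads[i] to word i."""
    # The subject's head is the token attached to the verb via an nsubj edge.
    for i, (h, lab) in enumerate(zip(heads, labels)):
        if h == verb_idx and lab == "nsubj":
            if tokens[i] in PLURAL_NOUNS:
                return "PLURAL"
            if tokens[i] in SINGULAR_NOUNS:
                return "SINGULAR"
    return None

# The parse of Figure 1: "The keys to the cabinet are on the table"
tokens = ["The", "keys", "to", "the", "cabinet", "are", "on", "the", "table"]
heads  = [1, 5, 1, 4, 2, -1, 5, 8, 6]
labels = ["det", "nsubj", "prep", "det", "pobj", "root", "prep", "det", "pobj"]

print(verb_number(tokens, heads, labels, verb_idx=5))  # PLURAL
```

Note that the intervening noun cabinet is simply never consulted: only the nsubj dependent of the verb matters.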
This property of the dependency entails that it cannot be captured by an n-gram model with a fixed n. RNNs are in principle able to capture dependencies of an unbounded length; however, it is an empirical question whether or not they will learn to do so in practice when trained on a natural corpus.

A more fundamental challenge that the dependency poses for structure-insensitive models is the possibility of agreement attraction errors (Bock and Miller, 1991). The correct form in (3) could be selected using simple heuristics such as "agree with the most recent noun", which are readily available to sequence models. In general, however, such heuristics are unreliable, since other nouns can intervene between the subject and the verb in the linear sequence of the sentence. Those intervening nouns can have the same number as the subject, as in (4), or the opposite number as in (5)-(7):
(4) Alluvial soils carried in the floodwaters add nutrients to the floodplains.

(5) The only championship banners that are currently displayed within the building are for national or NCAA Championships.

(6) The length of the forewings is 12-13.

(7) Yet the ratio of men who survive to the women and children who survive is not clear in this story.

Intervening nouns with the opposite number from the subject are called agreement attractors. The potential presence of agreement attractors entails that the model must identify the head of the syntactic subject that corresponds to a given verb in order to choose the correct inflected form of that verb.

Given the difficulty in identifying the subject from the linear sequence of the sentence, dependencies such as subject-verb agreement serve as an argument for structured syntactic representations in humans (Everaert et al., 2015); they may challenge models such as RNNs that do not have pre-wired syntactic representations. We note that subject-verb number agreement is only one of a number of structure-sensitive dependencies; other examples include negative polarity items (e.g., any) and reflexive pronouns (herself). Nonetheless, a model's success in learning subject-verb agreement would be highly suggestive of its ability to master hierarchical structure.

# 3 The Number Prediction Task

To what extent can a sequence model learn to be sensitive to the hierarchical structure of natural language? To study this question, we propose the number prediction task. In this task, the model sees the sentence up to but not including a present-tense verb, e.g.:
(8) The keys to the cabinet

It then needs to guess the number of the following verb (a binary choice, either PLURAL or SINGULAR). We examine variations on this task in Section 5.

In order to perform well on this task, the model needs to encode the concepts of syntactic number and syntactic subjecthood: it needs to learn that some words are singular and others are plural, and to be able to identify the correct subject. As we have illustrated in Section 2, correctly identifying the subject that corresponds to a particular verb often requires sensitivity to hierarchical syntax.

Data: An appealing property of the number prediction task is that we can generate practically unlimited training and testing examples for this task by querying a corpus for sentences with present-tense verbs, and noting the number of the verb. Importantly, we do not need to correctly identify the subject in order to create a training or test example. We generated a corpus of ~1.35 million number prediction problems based on Wikipedia, of which ~121,500 (9%) were used for training, ~13,500 (1%) for validation, and the remaining ~1.21 million (90%) were reserved for testing.4 The large number of test sentences was necessary to ensure that we had a good variety of test sentences representing less common constructions (see Section 4).5

Model and baselines: We encode words as one-hot vectors: the model does not have access to the characters that make up the word. Those vectors are then embedded into a 50-dimensional vector space. An LSTM with 50 hidden units reads those embedding vectors in sequence; the state of the LSTM at the end of the sequence is then fed into a logistic regression classifier. The network is trained6 in an end-to-end fashion, including the word embeddings.7

To isolate the effect of syntactic structure, we also consider a baseline which is exposed only to the nouns in the sentence, in the order in which they appeared originally, and is then asked to predict the number of the following verb. The goal of this baseline is to withhold the syntactic information carried by function words, verbs and other parts of speech. We explore two variations on this baseline: one that only receives common nouns (dogs, pipe), and another that also receives pronouns (he) and proper nouns (France). We refer to these as the noun-only baselines.

4 We limited our search to sentences that were shorter than 50 words. Whenever a sentence had more than one subject-verb dependency, we selected one of the dependencies at random.
5 Code and data are available at http://tallinzen.net/projects/lstm_agreement.
6 The network was optimized using Adam (Kingma and Ba, 2015) and early stopping based on validation set error. We trained the number prediction model 20 times with different random initializations, and report accuracy averaged across all runs. The models described in Sections 5 and 6 are based on 10 runs, with the exception of the language model, which is slower to train and was trained once.
7 The size of the vocabulary was capped at 10000 (after lowercasing). Infrequent words were replaced with their part of speech (Penn Treebank tagset, which explicitly encodes number distinctions); this was the case for 9.6% of all tokens and 7.1% of the subjects.

# 4 Number Prediction Results

Overall accuracy: Accuracy was very high overall: the system made an incorrect number prediction only in 0.83% of the dependencies. The noun-only baselines performed significantly worse: 4.2% errors for the common-nouns case and 4.5% errors for the all-nouns case.
This suggests that function words, verbs and other syntactically informative elements play an important role in the model's ability to correctly predict the verb's number. However, while the noun-only baselines made more than four times as many mistakes as the number prediction system, their still-low absolute error rate indicates that around 95% of agreement dependencies can be captured based solely on the sequence of nouns preceding the verb. This is perhaps unsurprising: sentences are often short and the verb is often directly adjacent to the subject, making the identification of the subject simple. To gain deeper insight into the syntactic capabilities of the model, then, the rest of this section investigates its performance on more challenging dependencies.8
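The architecture described in Section 3 (one-hot words, 50-dimensional embeddings, a 50-unit LSTM, and a logistic regression classifier on the final state) can be sketched as a single forward pass. This is our own NumPy illustration with untrained random weights, not the paper's implementation; a real run would train all parameters end-to-end with Adam, as described above.

```python
import numpy as np

# Sketch (our own illustration, untrained random weights) of the number
# prediction architecture: one-hot words -> 50-dim embeddings ->
# 50-unit LSTM -> logistic regression on the final hidden state.

rng = np.random.default_rng(0)
vocab_size, emb_dim, hidden = 10000, 50, 50

E = rng.normal(0, 0.1, (vocab_size, emb_dim))     # embedding matrix
W = rng.normal(0, 0.1, (4 * hidden, emb_dim + hidden))  # stacked gate weights
b = np.zeros(4 * hidden)
w_out = rng.normal(0, 0.1, hidden)                # logistic regression weights
b_out = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_plural_prob(word_ids):
    """Run the prefix through the LSTM and classify the final state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for wid in word_ids:
        x = E[wid]                        # embedding lookup (= one-hot @ E)
        z = W @ np.concatenate([x, h]) + b
        i = sigmoid(z[:hidden])           # input gate
        f = sigmoid(z[hidden:2 * hidden]) # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])  # candidate cell state
        o = sigmoid(z[3 * hidden:])       # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return sigmoid(w_out @ h + b_out)     # P(PLURAL) from the final state

p = predict_plural_prob([5, 42, 7, 1999])  # a hypothetical word-id prefix
print(0.0 < p < 1.0)  # True: a valid probability, meaningless until trained
```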
Distance: We first examine whether the network shows evidence of generalizing to dependencies where the subject and the verb are far apart. We focus in this analysis on simpler cases where no nouns intervened between the subject and the verb. As Figure 2a shows, performance did not degrade considerably when the distance between the subject and the verb grew up to 15 words (there were very few longer dependencies). This indicates that the network generalized the dependency from the common distances of 0 and 1 to rare distances of 10 and more.

8 These properties of the dependencies were identified by parsing the test sentences using the parser described in Goldberg and Nivre (2012).

Figure 2: (a-d) Error rates of the LSTM number prediction model as a function of: (a) distance between the subject and the verb, in dependencies that have no intervening nouns; (b) presence and number of last intervening noun; (c) count of attractors in dependencies with homogeneous intervention; (d) presence of a relative clause with and without an overt relativizer in dependencies with homogeneous intervention and exactly one attractor. All error bars represent 95% binomial confidence intervals. (e-f) Additional plots: (e) count of attractors per dependency in the corpus (note that the y-axis is on a log scale); (f) embeddings of singular and plural nouns, projected onto their first two principal components.

Agreement attractors: We next examine how the model's error rate was affected by nouns that intervened between the subject and the verb in the linear order of the sentence. We first focus on whether or not there were any intervening nouns, and if there were, whether the number of the subject differed from the number of the last intervening noun, the type of noun that would trip up the simple heuristic of agreeing with the most recent noun.
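The most-recent-noun heuristic can be made concrete with a short sketch; this is our own illustration with a toy lexicon, not code from the study. It shows why an agreement attractor such as cabinet defeats the heuristic.

```python
# Sketch (our own illustration) of the "agree with the most recent noun"
# heuristic available to sequence models, and of how an agreement
# attractor defeats it. The toy lexicon is an assumption for the example.

SINGULAR = {"key", "cabinet", "length", "ratio"}
PLURAL = {"keys", "cabinets", "forewings", "soils"}

def noun_number(word):
    if word in SINGULAR:
        return "SINGULAR"
    if word in PLURAL:
        return "PLURAL"
    return None  # not a noun in the toy lexicon

def most_recent_noun_heuristic(prefix):
    """Predict the verb's number from the last noun preceding it."""
    for word in reversed(prefix):
        n = noun_number(word)
        if n is not None:
            return n
    return None

# No intervening noun: the heuristic happens to be right.
print(most_recent_noun_heuristic("the keys".split()))
# prints "PLURAL" (correct)

# Attractor "cabinet" between subject and verb: the heuristic fails.
print(most_recent_noun_heuristic("the keys to the cabinet".split()))
# prints "SINGULAR" (wrong: the subject is "keys")
```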
As Figure 2b shows, a last intervening noun of the same number as the subject increased error rates only moderately, from 0.4% to 0.7% in singular subjects and from 1% to 1.4% in plural subjects.
On the other hand, when the last intervening noun was an agreement attractor, error rates increased by almost an order of magnitude (to 6.5% and 5.4% respectively). Note, however, that even an error rate of 6.5% is quite impressive considering uninformed strategies such as random guessing (50% error rate), always assigning the more common class label (32% error rate, since 32% of the subjects in our corpus are plural) and the number-of-most-recent-noun heuristic (100% error rate). The noun-only LSTM baselines performed much worse in agreement attraction cases, with error rates of 46.4% (common nouns) and 40% (all nouns).

We next tested whether the effect of attractors is cumulative, by focusing on dependencies with multiple attractors. To avoid cases in which the effect of an attractor is offset by an intervening noun with the same number as the subject, we restricted our search to dependencies in which all of the intervening nouns had the same number, which we term dependencies with homogeneous intervention. For example, (9) has homogeneous intervention whereas (10) does not:
(9) The roses in the vase by the door are red.

(10) The roses in the vase by the chairs are red.

Figure 2c shows that error rates increased gradually as more attractors intervened between the subject and the verb. Performance degraded quite slowly, however: even with four attractors the error rate was only 17.6%. As expected, the noun-only baselines performed significantly worse in this setting, reaching an error rate of up to 84% (worse than chance) in the case of four attractors. This confirms that syntactic cues are critical for solving the harder cases.

Relative clauses: We now look in greater detail into the network's performance when the words that intervened between the subject and verb contained a relative clause. Relative clauses with attractors are likely to be fairly challenging, for several reasons. They typically contain a verb that agrees with the attractor, reinforcing the misleading cue to noun number. The attractor is often itself a subject of an irrelevant verb, making a potential "agree with the most recent subject" strategy unreliable. Finally, the existence of a relative clause is sometimes not overtly indicated by a function word (relativizer), as in (11) (for comparison, see the minimally different (12)):

(11) The landmarks this article lists here are also run-of-the-mill and not notable.

(12) The landmarks that this article lists here are also run-of-the-mill and not notable.

For data sparsity reasons we restricted our attention to dependencies with a single attractor and no other intervening nouns. As Figure 2d shows, attraction errors were more frequent in dependencies with an overt relative clause (9.9% errors) than in dependencies without a relative clause (3.2%), and considerably more frequent when the relative clause was not introduced by an overt relativizer (25%).
As in the case of multiple attractors, however, while the model struggled with the more difficult dependencies, its performance was much better than random guessing, and slightly better than a majority-class strategy.

Word representations: We explored the 50-dimensional word representations acquired by the model by performing a principal component analysis. We assigned a part-of-speech (POS) to each word based on the word's most common POS in the corpus.
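The principal component analysis over word vectors can be sketched as follows; this is our own illustration, with random vectors standing in for the trained 50-dimensional embeddings.

```python
import numpy as np

# Sketch (our own illustration): project word embeddings onto their
# first two principal components, as in the analysis behind Figure 2f.
# Random vectors stand in for the trained 50-dimensional embeddings.

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 50))   # one 50-dim vector per word

def first_two_pcs(X):
    Xc = X - X.mean(axis=0)                # center each dimension
    # Principal directions = right singular vectors of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                   # (n_words, 2) projection

proj = first_two_pcs(embeddings)
print(proj.shape)  # (1000, 2)
```

Plotting the two projected coordinates, colored by each word's number, would then reveal whether singular and plural nouns occupy separable regions of the embedding space.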