Dataset schema (one row per paper chunk):
- doi: string (length 10)
- chunk-id: int64 (0 to 936)
- chunk: string (401 to 2.02k chars)
- id: string (12 to 14 chars)
- title: string (8 to 162 chars)
- summary: string (228 to 1.92k chars)
- source: string (length 31)
- authors: string (7 to 6.97k chars)
- categories: string (5 to 107 chars)
- comment: string (4 to 398 chars)
- journal_ref: string (8 to 194 chars)
- primary_category: string (5 to 17 chars)
- published: string (length 8)
- updated: string (length 8)
- references: list
1704.04651
46
# 6.1 HYPERPARAMETER OPTIMIZATION As we believe that algorithms should be robust with respect to the choice of hyperparameters, we spent little effort on parameter optimization. In total, we explored three distinct learning rates, two values of ADAM momentum (the default and zero), and two values of T_update, on a subset of 7 Atari games, without prioritization, using the non-distributional version of Reactor. We later used those values for all experiments. We did not optimize batch size, sequence length, or any prioritization hyperparameters. # 6.2 RANK AND ELO EVALUATION
1704.04651#46
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
47
Commonly used mean and median human normalized scores have several disadvantages. A mean human normalized score implicitly puts more weight on games that computers are good at and humans are bad at. Comparing algorithms by mean human normalized score across 57 Atari games is almost equivalent to comparing them on a small subset of high-scoring games that dominate the signal. Typically, a set of the ten most score-generous games, namely Assault, Asterix, Breakout, Demon Attack, Double Dunk, Gopher, Phoenix, Stargunner, Up'n Down and Video Pinball, can explain more than half of the inter-algorithm variance. A median human normalized score has the opposite disadvantage: it effectively discards very easy and very hard games from the comparison. As typical median human normalized scores are within the range of 1-2.5, an algorithm which scores zero points on Montezuma's Revenge is evaluated as equal to one which scores 2500 points, as both performance levels are still below human performance; incremental improvements on hard games are thus not reflected in the overall evaluation. In order to address both problems, we also evaluated mean rank and Elo metrics for inter-algorithm comparison. These metrics implicitly assign the same weight to each game and, as a result, are more sensitive to relative performance on very hard and very easy games: swapping the scores of two algorithms on any game changes both the mean rank and the Elo metric.
1704.04651#47
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
48
We calculated separate mean rank and Elo scores for each algorithm using the results of test evaluations with 30 random no-op starts and 30 random human starts (Tables 5 and 4). All algorithms were ranked on each game separately, and a mean rank was evaluated across the 57 Atari games. For the Elo evaluation, algorithm A was considered to win over algorithm B if it obtained a higher score on a given Atari game. We produced an empirical win-probability matrix by summing wins across all games and used this matrix to evaluate Elo scores. A rating difference of 400 corresponds to winning odds of 10:1 under the Gaussian assumption. # 6.3 CONTEXTUAL PRIORITY TREE The contextual priority tree is one possible implementation of lazy prioritization (Figure 4). All sequence keys are put into a balanced binary search tree which maintains temporal order. An AVL tree (Velskii & Landis (1976)) was chosen due to its ease of implementation and because it is on average more evenly balanced than a red-black tree. Each tree node has up to two children (left and right) and contains the currently stored key and a priority of that key, which is either set or unknown. Some trees may have only a single child subtree, while some may have none.
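A minimal sketch of the rank and Elo evaluation described above, assuming a `scores[algo]` table of per-game final scores (all names illustrative). The Elo ratings are fit by gradient ascent on the Bradley-Terry likelihood using the logistic curve under which a 400-point gap gives 10:1 odds; the paper mentions a Gaussian assumption, so this is an approximation:

```python
import numpy as np

def mean_ranks(scores):
    """scores: dict algo -> array of per-game final scores (same game order).
    Rank algorithms within each game (1 = best), then average across games.
    Ties are ignored for brevity."""
    algos = list(scores)
    table = np.array([scores[a] for a in algos])        # [n_algos, n_games]
    ranks = (-table).argsort(axis=0).argsort(axis=0) + 1
    return {a: ranks[i].mean() for i, a in enumerate(algos)}

def elo_ratings(scores, lr=5.0, iters=5000):
    """Fit one rating per algorithm from win counts summed over all games.
    A 400-point rating gap corresponds to 10:1 winning odds on this curve."""
    algos = list(scores)
    table = np.array([scores[a] for a in algos])
    n = len(algos)
    wins = np.zeros((n, n))
    for g in range(table.shape[1]):
        for i in range(n):
            for j in range(n):
                if i != j and table[i, g] > table[j, g]:
                    wins[i, j] += 1.0
    games = wins + wins.T
    ratings = np.zeros(n)
    for _ in range(iters):
        diff = ratings[:, None] - ratings[None, :]
        pred = 1.0 / (1.0 + 10.0 ** (-diff / 400.0))    # expected win prob
        grad = (wins - games * pred).sum(axis=1)        # observed - expected
        ratings += lr * grad / np.maximum(games.sum(axis=1), 1.0)
    return dict(zip(algos, ratings - ratings.mean()))   # anchor mean at 0
```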
1704.04651#48
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
49
Figure 4: Illustration of lazy prioritization, where sequences with no explicitly assigned priorities get priorities estimated by a linear combination of nearby assigned priorities. (Legend: each square either has a set priority, has no set priority, or has a priority to be estimated.) Exact boundaries of the blue and red intervals are arbitrary (as long as all conditions described in Section 3.3 are satisfied), thus leading to many possible algorithms. Each square represents an individual sequence of size 32 (sequences overlap). Inverse sizes of the blue regions work as local density estimates, allowing unbiased priority estimates to be produced.
1704.04651#49
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
51
Figure 5: Rules used to evaluate summary statistics at each node of a binary search tree in which all sequence keys are kept sorted in temporal order. cl and cr are the total numbers of nodes within the left and right subtrees. ml and mr are the estimated mean priorities per node within each subtree. The central square node corresponds to the single key stored within the parent node, with its corresponding priority p (if set) or ? (if not set). Red subtrees do not have any single child with a set priority and as a result do not have priority estimates. A red square shows that the priority of the key stored within the parent node is not known. Unknown mean priorities are marked by a question mark. Empty child nodes simply behave as if c = 0 with p = ?. Rules a-f illustrate how mean values are propagated from children to parents when priorities are only partially known (rules d and e also apply symmetrically). Sampling is done by walking the tree from the root node, selecting one of the children (or the current key) stochastically, proportionally to the orange proportions. Sampling terminates once the current (square) key is chosen.
1704.04651#51
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
52
Figure 6: Example of a balanced priority tree. Dark blue nodes contain keys with known priorities, light blue nodes have at least one child with at least a single known priority, while pink nodes do not have any priority estimates. Nodes 1, 2 and 3 will obtain priority estimates equal to 2/3 of the priority of key 5 and 1/3 of the priority of node 4. This implies that the estimated priorities of keys 1, 2 and 3 are implicitly defined by keys 4 and 6. Nodes 8, 9 and 11 are estimated to have the same priority as node 10. Some trees may have only a single child subtree, while some may have none. In addition to this information, we tracked other summary statistics at each node, which were re-evaluated after each tree rotation. The summary statistics were evaluated by consuming the previously evaluated summary statistics of both children and the priority of the key stored within the current node. In particular, we tracked the total number of nodes within each subtree and mean-priority estimates, updated according to the rules shown in Figure 5. The total number of nodes within each subtree was always known (c in Figure 5), while the mean priority estimate per key (m in Figure 5) could be either known or unknown.
1704.04651#52
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
53
If the mean priority of either one child subtree or of the key stored within the current node is unknown, it can be estimated by exploiting information coming from the sibling subtree or from the priority stored within the parent node. Sampling was done by traversing the tree from the root node while sampling either one of the child subtrees or the currently held key, proportionally to the total estimated priority masses contained within each. The rules used to evaluate these proportions are shown in orange in Figure 5. Similarly, the probability of an arbitrary key can be queried by traversing the tree from the root node towards the child node of interest while maintaining a product of probabilities at each branching point. Insertion, deletion, sampling and probability-query operations can all be done in O(log n) time. The suggested algorithm has the desired property that it becomes a simple proportional sampling algorithm once all priorities are known. While some key priorities are unknown, they are estimated using nearby known key priorities (Figure 6).
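A much-simplified sketch of the idea, assuming an ordinary (unbalanced) binary search tree rather than the paper's AVL tree, and recomputing subtree statistics on every call instead of caching them per node as the paper does (the caching is what gives the O(log n) bound). Unknown priorities fall back to the local mean of known ones; all names are illustrative and priorities are assumed positive:

```python
import random

class Node:
    """One key of the temporally ordered tree; priority may be unknown."""
    def __init__(self, key, priority=None):
        self.key, self.priority = key, priority
        self.left = self.right = None

def stats(node):
    """(total key count, sum of known priorities, count of known priorities)."""
    if node is None:
        return 0, 0.0, 0
    cl, sl, kl = stats(node.left)
    cr, sr, kr = stats(node.right)
    c, s, k = cl + cr + 1, sl + sr, kl + kr
    if node.priority is not None:
        s, k = s + node.priority, k + 1
    return c, s, k

def mass(node, fallback):
    """Estimated priority mass of a subtree: known priorities plus the
    fallback mean for every key whose priority is still unknown."""
    c, s, k = stats(node)
    return s + (c - k) * fallback

def sample(node, fallback=1.0):
    """Walk from the root, picking left subtree / this key / right subtree
    proportionally to estimated masses; stop when the local key is picked."""
    c, s, k = stats(node)
    if k > 0:
        fallback = s / k          # refine the estimate with the local known mean
    w_left = mass(node.left, fallback)
    w_self = node.priority if node.priority is not None else fallback
    w_right = mass(node.right, fallback)
    r = random.uniform(0.0, w_left + w_self + w_right)
    if r < w_left:
        return sample(node.left, fallback)
    if r < w_left + w_self:
        return node.key
    return sample(node.right, fallback)
```

Once every key has a known priority, `fallback` is never used and this reduces to plain proportional sampling, matching the property described above.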
1704.04651#53
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
54
Each time a new sequence key was added to the tree, it was set to have an unknown priority. A priority was assigned only after the key was first sampled and the corresponding sequence was passed through the learner. When the priority of a key is set or updated, the key node is deliberately removed from the tree and placed back so as to become a leaf node. This helped to set priorities of nodes in the immediate vicinity more accurately, by using the freshest information available. # 6.4 NETWORK ARCHITECTURE The value ε = 0.01 is the minimum probability of choosing a random action, and it is hard-coded into the policy network. Figure 7 shows the overall network topology, while Table 3 specifies the network layer sizes. Figure 7: Network architecture (a convnet and linear torso feeding two LSTM streams: one producing the current policy π(x, a), the other producing V(x) and A(x, a), combined into the action-value estimate Q(x, a)).
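The ε-smoothing of the policy head is just a mixture of the softmax output with the uniform distribution; a minimal sketch:

```python
import numpy as np

def policy_probs(logits, eps=0.01):
    """pi = (1 - eps) * softmax(logits) + eps / n_actions, so every action
    keeps at least probability eps / n_actions."""
    z = np.exp(logits - logits.max())   # stable softmax
    p = z / z.sum()
    return (1.0 - eps) * p + eps / len(logits)
```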
1704.04651#54
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
55
Figure 7: Network architecture. Table 3: Specification of the neural network used (illustrated in Figure 7)

| Convolutional | Size | Kernel width | Output channels | Strides |
| --- | --- | --- | --- | --- |
| Conv 1 | [84, 84, 1] | [8, 8] | 16 | 4 |
| ConcatReLU | [20, 20, 16] | | | |
| Conv 2 | [20, 20, 32] | [4, 4] | 32 | 2 |
| ConcatReLU | [9, 9, 32] | | | |
| Conv 3 | [9, 9, 64] | [3, 3] | 32 | 1 |
| ConcatReLU | [7, 7, 32] | | | |

| Fully connected | Size | Output size |
| --- | --- | --- |
| Linear | [7, 7, 64] | 128 |
| ConcatReLU | [128] | |

| Recurrent (π) | Size | Output size |
| --- | --- | --- |
| LSTM | [256] | 128 |
| Linear | [128] | 32 |
| ConcatReLU | [32] | |
| Linear | [64] | #actions |
| Softmax | [#actions] | #actions |
| × (1 − ε) + ε/#actions | [#actions] | #actions |

| Recurrent (Q) | Size | Output size |
| --- | --- | --- |
| LSTM | [256] | 128 |

| Value logit head | Size | Output size |
| --- | --- | --- |
| Linear | [128] | 32 |
| ConcatReLU | [32] | |
| Linear | [64] | #bins |

| Advantage logit head | Size | Output size |
| --- | --- | --- |
| Linear | [128] | 32 |
| ConcatReLU | [32] | |
| Linear | [64] | #actions × #bins |
1704.04651#55
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
56
# 6.5 COMPARISONS WITH RAINBOW In this section we compare Reactor with the recently published Rainbow agent (Hessel et al., 2017). While ACER is the most closely related algorithmically, Rainbow is most closely related in terms of performance, and thus a deeper understanding of the trade-offs between Rainbow and Reactor may benefit interested readers. There are many architectural and algorithmic differences between Rainbow and Reactor. We will therefore begin by highlighting where they agree. Both use a categorical action-value distribution critic (Bellemare et al., 2017), factored into state and state-action logits (Wang et al., 2015): $q_i(x, a) = \frac{e^{l_i(x, a)}}{\sum_j e^{l_j(x, a)}}$, where $l_i(x, a) = l_i(x) + l^A_i(x, a) - \frac{1}{|\mathcal{A}|} \sum_{b \in \mathcal{A}} l^A_i(x, b)$, with $l_i(x)$ the state logits and $l^A_i(x, a)$ the state-action logits. Both use prioritized replay, and finally, both perform n-step Bellman updates.
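A minimal sketch of this factorization under the reconstruction above, with `value_logits` as the state logits and `adv_logits` as the state-action logits (names illustrative):

```python
import numpy as np

def categorical_q(value_logits, adv_logits):
    """value_logits: [n_bins]; adv_logits: [n_actions, n_bins].
    Combine the logits dueling-style (state logits plus mean-centered
    state-action logits), then softmax over return bins to get one
    categorical distribution q[a, :] per action."""
    l = value_logits[None, :] + adv_logits - adv_logits.mean(axis=0, keepdims=True)
    z = np.exp(l - l.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)
```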
1704.04651#56
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
57
Both use prioritized replay, and finally, both perform n-step Bellman updates. Despite these similarities, Reactor and Rainbow are fundamentally different algorithms, based upon different lines of research. While Rainbow uses Q-Learning and is based upon DQN (Mnih et al., 2015), Reactor is an actor-critic algorithm most closely based upon A3C (Mnih et al., 2016). Each inherits some design choices from its predecessor, and we have not performed an extensive ablation comparing these various differences. Instead, we will discuss four of the differences we believe are important but less obvious. First, the network structures are substantially different. Rainbow uses noisy linear layers and ReLU activations throughout the network, whereas Reactor uses standard linear layers and concatenated ReLU activations throughout. To overcome partial observability, Rainbow, inheriting this choice from DQN, uses frame stacking. Reactor, on the other hand, inheriting its choice from A3C, uses LSTMs after the convolutional layers of the network. It is also difficult to directly compare the number of parameters in each network, because the use of noisy linear layers doubles the number of parameters (although half of these are used to control noise), while the LSTM units in Reactor require more parameters than a corresponding linear layer would.
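For reference, concatenated ReLU (CReLU) keeps both half-waves of the activation and doubles the channel count; a one-line sketch, which also explains why each conv input in Table 3 has twice the channels of the previous layer's output:

```python
import numpy as np

def concat_relu(x, axis=-1):
    """CReLU: concatenate ReLU(x) and ReLU(-x), doubling the channel count."""
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=axis)
```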
1704.04651#57
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
58
Second, both algorithms perform n-step updates; however, the Rainbow n-step update does not use any form of off-policy correction. Because of this, Rainbow is restricted to small values of n (e.g. n = 3), since larger values would make sequences more off-policy and hurt performance. By comparison, Reactor uses our proposed distributional Retrace algorithm for off-policy correction of n-step updates. This allows the use of larger values of n (e.g. n = 33) without loss of performance.
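A sketch of the trace coefficients that make large n safe, shown for the expected-value Retrace(λ) target rather than the paper's distributional version; the arrays q, v, rewards, pi and mu along a sampled trajectory are assumed given:

```python
def retrace_target(q, v, rewards, pi, mu, gamma=0.99, lam=1.0):
    """Expected-value Retrace(lambda) target for the first state-action pair.
    q[s] = Q(x_s, a_s); v[s] = expected value of x_s under the target policy
    (length n + 1); rewards[s] = r_s; pi[s], mu[s] = target/behaviour
    probabilities of a_s. Traces c_s = lam * min(1, pi_s / mu_s) cut the
    contribution of off-policy actions, keeping large n sound."""
    n = len(rewards)
    target, trace, discount = q[0], 1.0, 1.0
    for s in range(n):
        td = rewards[s] + gamma * v[s + 1] - q[s]   # TD error at step s
        target += discount * trace * td
        if s + 1 < n:
            discount *= gamma
            trace *= lam * min(1.0, pi[s + 1] / mu[s + 1])
    return target
```

With on-policy data (pi == mu and lam = 1), every trace equals 1 and this reduces to an ordinary n-step return.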
1704.04651#58
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
59
Third, while both agents use prioritized replay buffers (Schaul et al., 2016), they each store different information and prioritize using different algorithms. Rainbow stores a tuple containing the state $x_{t-1}$, the action $a_{t-1}$, the sum of $n$ discounted rewards $\sum_{k=0}^{n-1} \left( \prod_{m=0}^{k-1} \gamma_{t+m} \right) r_{t+k}$, the product of $n$ discount factors $\prod_{k=0}^{n-1} \gamma_{t+k}$, and the next state $n$ steps away, $x_{t+n-1}$. Tuples are prioritized based upon the last observed TD error, and inserted into replay with a maximum priority. Reactor stores length-$n$ sequences of tuples $(x_{t-1}, a_{t-1}, r_t, \gamma_t)$ and also prioritizes based upon the observed TD error. However, when a sequence is inserted into the buffer, its priority is instead inferred from the known priorities of neighboring sequences. This priority inference was made efficient using the previously introduced contextual priority tree, and anecdotally we have seen it improve performance over a simple maximum-priority approach.
1704.04651#59
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
60
Finally, the two algorithms have different approaches to exploration. Rainbow, unlike DQN, does not use ε-greedy exploration, but instead replaces all linear layers with noisy linear layers which induce randomness throughout the network. This method, called Noisy Networks (Fortunato et al., 2017), creates an adaptive exploration integrated into the agent's network. Reactor does not use Noisy Networks, but instead uses the same entropy cost method used by A3C and many others, which penalizes deterministic policies, thus encouraging indifference between similarly valued actions. Because Rainbow can essentially learn not to explore, it may learn to become entirely greedy in the early parts of the episode, while still exploring in states not as frequently seen. In some sense, this is precisely what we want from an exploration technique, but it may also lead to highly deterministic trajectories in the early part of the episode and an increase in overfitting to those trajectories. We hypothesize that this may explain the significant difference in Rainbow's performance between evaluation under no-op and random human starts, and why Reactor does not show such a large difference. # 6.6 ATARI RESULTS
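The entropy cost referred to here is the standard A3C-style bonus; a minimal sketch, with the coefficient `beta` illustrative:

```python
import numpy as np

def entropy_bonus(probs, beta=0.01):
    """A3C-style entropy regularizer: beta * H(pi) is added to the objective,
    penalizing near-deterministic policies."""
    return beta * -(probs * np.log(probs + 1e-12)).sum()
```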
1704.04651#60
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04651
61
# 6.6 ATARI RESULTS Table 4: Scores for each game evaluated with 30 random human starts. Reactor was evaluated by averaging scores over 200 episodes. All scores (except for Reactor) were taken from Wang et al. (2015), Mnih et al. (2016) and Hessel et al. (2017). Table 5: Scores for each game evaluated with 30 random no-op starts. Reactor was evaluated by averaging scores over 200 episodes. All scores (except for Reactor) were taken from Wang et al. (2015) and Hessel et al. (2017). (Per-game score table omitted; columns: GAME, then per-agent scores for RANDOM, HUMAN, DQN, DDQN, DUEL, PRIOR, RAINBOW, REACTOR.)
1704.04651#61
The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
http://arxiv.org/pdf/1704.04651
Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos
cs.AI
null
null
cs.AI
20170415
20180619
[ { "id": "1707.06347" }, { "id": "1703.01161" }, { "id": "1509.02971" }, { "id": "1710.02298" }, { "id": "1706.10295" }, { "id": "1707.06887" }, { "id": "1511.05952" } ]
1704.04341
0
arXiv:1704.04341v1 [cs.AI] 14 Apr 2017 # Environment-Independent Task Specifications via GLTL Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan # Abstract We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly. # 1 Introduction ...a reward function as a program, and a policy as an output, then reinforcement learning can be viewed as a process of program interpretation. We would like the same program to work across all possible inputs. # 1.1 Specifying behavior via reward functions
1704.04341#0
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
0
arXiv:1704.04368v2 [cs.CL] 25 Apr 2017 # Get To The Point: Summarization with Pointer-Generator Networks Abigail See (Stanford University, [email protected]), Peter J. Liu (Google Brain, [email protected]), Christopher D. Manning (Stanford University, [email protected]) # Abstract Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
1704.04368#0
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
1
# 1.1 Specifying behavior via reward functions An MDP consists of a finite state space, action space, transition function, and reward function. Given an environment, an agent should behave in a way that maximizes cumulative discounted expected reward. The problems of learning and planning in such environments have been vigorously studied in the AI community for over 25 years (Watkins, 1989; Boutilier et al., 1999; Strehl et al., 2009). A reinforcement-learning (RL) agent needs to learn to maximize cumulative discounted expected reward starting with an incomplete model of the MDP itself. The thesis of this work is that (1) rewards are an excellent way of controlling the behavior of agents, but (2) rewards are difficult to use for specifying behaviors in an environment-independent way, therefore (3) we need intermediate representations between behavior specifications and reward functions.
1704.04341#1
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
1
Original Text (truncated): lagos, nigeria (cnn) a day after winning nigeria's presidency, muhammadu buhari told cnn's christiane amanpour that he plans to aggressively fight corruption that has long plagued nigeria and go after the root of the nation's unrest. buhari said he'll "rapidly give attention" to curbing violence in the northeast part of nigeria, where the terrorist group boko haram operates. by cooperating with neighboring nations chad, cameroon and niger, he said his administration is confident it will be able to thwart criminals and others contributing to nigeria's instability. for the first time in nigeria's history, the opposition defeated the ruling party in democratic elections. buhari defeated incumbent goodluck jonathan by about 2 million votes, according to nigeria's independent national electoral commission. the win comes after a long history of military rule, coups and botched attempts at democracy in africa's most populous nation.
1704.04368#1
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
2
The intermediate representation we propose is a novel variant of linear temporal logic that is modified to be probabilistic so as to better support reinforcement-learning tasks. Linear temporal logic has been used in the past to specify reward functions that depend on temporal sequences (Bacchus et al., 1996); here, we expand the role to provide a robust and consistent semantics that allows desired behaviors to be specified in an environment-independent way. Briefly, our approach involves the specification of tasks via temporal operators that have a constant probability of expiring on each step. As such, it bears a close relationship to the notion of discounting in standard Markov decision process (MDP) reward functions (Puterman, 1994). At a philosophical level, we are viewing behavior specification as a kind of programming problem. That is, if we think of a Markov decision process (MDP) as an input, a
1704.04341#2
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
2
Baseline Seq2Seq + Attention: UNK UNK says his administration is confident it will be able to destabilize nigeria's economy. UNK says his administration is confident it will be able to thwart criminals and other nigerians. he says the country has long nigeria and nigeria's economy. Pointer-Gen: muhammadu buhari says he plans to aggressively fight corruption in the northeast part of nigeria. he says he'll "rapidly give attention" to curbing violence in the northeast part of nigeria. he says his administration is confident it will be able to thwart criminals. Pointer-Gen + Coverage: muhammadu buhari says he plans to aggressively fight corruption that has long plagued nigeria. he says his administration is confident it will be able to thwart criminals. the win comes after a long history of military rule, coups and botched attempts at democracy in africa's most populous nation.
1704.04368#2
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
3
For "programming" reinforcement-learning agents, the state of the art is to define a reward function and then for the learning agent to interact with the environment to discover ways to maximize its reward. Reward-based specifications have proven to be extremely valuable for optimal planning in complex, uncertain environments (Russell & Norvig, 1994). However, we can show that reward functions, as they are currently structured, are very difficult to work with as a way of reliably specifying tasks. The best use case for reward functions is when the utilities of all actions and outcomes can be expressed in a consistent unit, for example, time or money or energy. In reality, however, putting a meaningful dollar figure on scuffing a wall or dropping a clean fork is challenging. When informally adding negative rewards to undesirable outcomes, it is difficult to ensure a consistent semantics over which planning and reasoning can be carried out. Further, reward values often need to be changed if the environment itself changes; they are not environment independent. Therefore, to get a system to exhibit a desired behavior, it can be necessary to try different reward structures and carry out learning multiple times in the target environment, greatly undermining the purpose of autonomous learning in the first place.
1704.04341#3
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
3
Figure 1: Comparison of the output of 3 abstractive summarization models on a news article. The baseline model makes factual errors, produces a nonsensical sentence, and struggles with the OOV words muhammadu buhari. The pointer-generator model is accurate but repeats itself. Coverage eliminates repetition. The final summary is composed from several fragments. # 1 Introduction Summarization is the task of condensing a piece of text to a shorter version that contains the main information from the original. There are two broad approaches to summarization: extractive and abstractive. Extractive methods assemble summaries exclusively from passages (usually whole sentences) taken directly from the source text, while abstractive methods may generate novel words and phrases not featured in the source text – as a human-written abstract usually does. The extractive approach is easier, because copying large chunks of text from the source document ensures baseline levels of grammaticality and accuracy. On the other hand, sophisticated abilities that are crucial to high-quality summarization, such as paraphrasing, generalization, or the incorporation of real-world knowledge, are possible only in an abstractive framework (see Figure 5).
1704.04368#3
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
4
Figure 1: Action a2 has probability 1 − p of transitioning to a non-b state and a probability of p of entering a self-loop in a b state. Action a1 passes through a b state and then over to the goal. Consider the simple example MDP in Figure 1. The agent is choosing between a1 and a2 in the initial state s0. Choosing a1 causes the agent to pass through bad state b1 for one step, then to continue on to the goal g. Action a2, however, results in a probabilistic transition to s1 (with probability 1 − p) or bad state b2 (with slip probability p). From s1, the agent can continue on to the goal. If it reaches b2, it gets stuck there forever. Let’s say our desired behavior is “maximize the probability of reaching g without hitting a bad state”. (A bad state could be something like colliding with a wall or bumping up against a table.) The probability of success of a1 is zero and of a2 is 1 − p. Thus, for any 0 ≤ p < 1, it is better to take action a2.
1704.04341#4
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
4
Due to the difficulty of abstractive summarization, the great majority of past work has been extractive (Kupiec et al., 1995; Paice, 1990; Saggion and Poibeau, 2013). However, the recent success of sequence-to-sequence models (Sutskever [diagram omitted] Figure 2: Baseline sequence-to-sequence model with attention. The model may attend to relevant words in the source text to generate novel words, e.g., to produce the novel word beat in the abstractive summary Germany beat Argentina 2-0 the model may attend to the words victorious and win in the source text.
1704.04368#4
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
5
What reward function encourages this behavior? For concreteness, let’s assume a discount of γ = 0.8 and a reward of +1 for reaching the goal. We can assign bad states a value of −r. In the case where p = 0.1, setting r > 0.16 encourages the desired behavior. Consider, though, what happens if the slip probability is p = 0.3. Now, there is no value of r for which a2 is preferred to a1. That is, it has become impossible to find a reward function that creates the correct incentives for the desired behavior to be optimal. This example is perhaps a bit contrived, but we have observed the same phenomenon in large and natural state spaces as well. The reason for this result is that reward functions force us to express utility in terms of the discounted expected visit frequency of states. In this case, we are stuck trying to make a tradeoff between the certainty of encountering a bad state once and the possibility of encountering a bad state repeatedly. Since we are trying to maximize the probability of zero encounters with a bad state, the expected number of encounters is only useful for distinguishing zero from more than zero; the objective cannot be translated into a reward function when bad states
1704.04341#5
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
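To make the threshold arithmetic above concrete, here is a small NumPy sketch (not from the paper) that computes the two action values under one assumed reward timing: −r is received for the step spent in a bad state and +1 on the step the goal is reached. The exact timing convention is an assumption; other conventions shift the constant but not the conclusion.

```python
import numpy as np

GAMMA = 0.8  # discount, as in the text

def value_a1(r):
    return -r + GAMMA * 1.0                 # one step in b1, then the goal

def value_a2(r, p):
    stuck = -r / (1.0 - GAMMA)              # -r forever: geometric series
    return (1.0 - p) * GAMMA * 1.0 + p * stuck

for p in (0.1, 0.3):
    rs = np.linspace(0.01, 10.0, 1000)
    good = [r for r in rs if value_a2(r, p) > value_a1(r)]
    print(p, good[0] if good else "no r makes a2 preferred")
# Prints ~0.17 for p = 0.1 (consistent with r > 0.16) and no r for p = 0.3.
```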
1704.04368
5
et al., 2014), in which recurrent neural networks (RNNs) both read and freely generate text, has made abstractive summarization viable (Chopra et al., 2016; Nallapati et al., 2016; Rush et al., 2015; Zeng et al., 2016). Though these systems are promising, they exhibit undesirable behavior such as inaccurately reproducing factual details, an inability to deal with out-of-vocabulary (OOV) words, and repeating themselves (see Figure 1). […] that were applied to short-text summarization. We propose a novel variant of the coverage vector (Tu et al., 2016) from Neural Machine Translation, which we use to track and control coverage of the source document. We show that coverage is remarkably effective for eliminating repetition. # 2 Our Models
1704.04368#5
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
6
# 2 Our Models In this paper we present an architecture that addresses these three issues in the context of multi-sentence summaries. While most recent abstractive work has focused on headline generation tasks (reducing one or two sentences to a single headline), we believe that longer-text summarization is both more challenging (requiring higher levels of abstraction while avoiding repetition) and ultimately more useful. Therefore we apply our model to the recently-introduced CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016), which contains news articles (39 sentences on average) paired with multi-sentence summaries, and show that we outperform the state-of-the-art abstractive system by at least 2 ROUGE points. Our hybrid pointer-generator network facilitates copying words from the source text via pointing (Vinyals et al., 2015), which improves accuracy and handling of OOV words, while retaining the ability to generate new words. The network, which can be viewed as a balance between extractive and abstractive approaches, is similar to Gu et al.’s (2016) CopyNet and Miao and Blunsom’s (2016) Forced-Attention Sentence Compression,
1704.04368#6
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
7
Linear temporal logic formulas are built up from a set of atomic propositions; the logic connectives: negation (¬), disjunction (∨), conjunction (∧) and material implication (→); and the temporal modal operators: next (○), always (□), eventually (♦) and until (U). A wide class of properties including safety (□¬b), goal guarantee (♦g), progress (□♦g), response (□(b → ♦g)), and stability (♦□g), where b and g are atomic propositions, can be expressed as LTL formulas. More complicated specifications can be obtained from the composition of such simple formulas. For example, the specification of “repeatedly visit certain locations of interest in a given order while avoiding certain other unsafe or undesirable locations” can be obtained through proper composition of simpler safety and progress formulas (Manna & Pnueli, 1992; Baier & Katoen, 2008).
1704.04341#7
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
7
In this section we describe (1) our baseline sequence-to-sequence model, (2) our pointer-generator model, and (3) our coverage mechanism that can be added to either of the first two models. The code for our models is available online.1 # 2.1 Sequence-to-sequence attentional model Our baseline model is similar to that of Nallapati et al. (2016), and is depicted in Figure 2. The tokens of the article w_i are fed one-by-one into the encoder (a single-layer bidirectional LSTM), producing a sequence of encoder hidden states h_i. On each step t, the decoder (a single-layer unidirectional LSTM) receives the word embedding of the previous word (while training, this is the previous word of the reference summary; at test time it is the previous word emitted by the decoder), and has decoder state s_t. The attention distribution a^t is calculated as in Bahdanau et al. (2015): e^t_i = v^T tanh(W_h h_i + W_s s_t + b_attn) (1), a^t = softmax(e^t) (2), where v, W_h, W_s and b_attn are learnable parameters. The attention distribution can be viewed as 1www.github.com/abisee/pointer-generator
1704.04368#7
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
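As a concrete illustration of equations (1)–(2), the following NumPy sketch computes the attention distribution with randomly initialized stand-in parameters; the toy sizes are assumptions, and in the paper these weights are learned end-to-end.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(0)
n_src, d_h, d_s, d_attn = 6, 8, 8, 10       # toy sizes, not the paper's
h = rng.normal(size=(n_src, d_h))           # encoder hidden states h_i
s_t = rng.normal(size=d_s)                  # decoder state at step t
W_h = rng.normal(size=(d_attn, d_h))
W_s = rng.normal(size=(d_attn, d_s))
b_attn = rng.normal(size=d_attn)
v = rng.normal(size=d_attn)

# e^t_i = v^T tanh(W_h h_i + W_s s_t + b_attn)   eq (1)
e_t = np.array([v @ np.tanh(W_h @ h_i + W_s @ s_t + b_attn) for h_i in h])
a_t = softmax(e_t)                          # eq (2): attention distribution
assert np.isclose(a_t.sum(), 1.0)
```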
1704.04341
8
Returning to the example in Figure 1, the task is to avoid b states until g is reached: ¬b U g. Given an LTL specification and an environment, an agent, for example, should adopt a behavior that maximizes the probability that the specification is satisfied. One advantage of this approach is its ability to specify tasks that cannot be expressed using simple reward functions (like the example MDP in Section 1.1). Indeed, in the context of reinforcement-learning problems, we have found it very natural to express standard MDP task specifications using LTL. Standard MDP tasks can be expressed well using these temporal operators. For example: • Goal-based tasks like mountain car (Moore, 1991): If p represents the attribute of being at the goal (the top of the hill, say), ♦p corresponds to eventually reaching the goal. • Avoidance-type tasks like cart pole (Barto et al., 1983): If q represents the attribute of being in the failure state (dropping the pole, say), □¬q corresponds to always avoiding the failure state.
1704.04341#8
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
8
1www.github.com/abisee/pointer-generator [diagram omitted] Figure 3: Pointer-generator model. For each decoder timestep a generation probability pgen ∈ [0, 1] is calculated, which weights the probability of generating words from the vocabulary, versus copying words from the source text. The vocabulary distribution and the attention distribution are weighted and summed to obtain the final distribution, from which we make our prediction. Note that out-of-vocabulary article words such as 2-0 are included in the final distribution. Best viewed in color.
1704.04368#8
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
9
• Sequence tasks like taxi (Dietterich, 2000): If p represents some task being completed (getting the passenger, say) and q represents another task being completed (delivering the passenger, say), ♦(p ∧ ♦q) corresponds to eventually completing the first task, then, from there, eventually completing the second task. • Stabilizing tasks like pendulum swing up (Atkeson, 1994): If p represents the property that needs to be stabilized (the pendulum being above the vertical, say),
1704.04341#9
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
9
a probability distribution over the source words that tells the decoder where to look to produce the next word. Next, the attention distribution is used to produce a weighted sum of the encoder hidden states, known as the context vector h∗_t: h∗_t = ∑_i a^t_i h_i (3). The context vector, which can be seen as a fixed-size representation of what has been read from the source for this step, is concatenated with the decoder state s_t and fed through two linear layers to produce the vocabulary distribution P_vocab: P_vocab = softmax(V′(V[s_t, h∗_t] + b) + b′) (4), where V, V′, b and b′ are learnable parameters. P_vocab is a probability distribution over all words in the vocabulary, and provides us with our final distribution from which to predict words w: P(w) = P_vocab(w) (5). During training, the loss for timestep t is the negative log likelihood of the target word w∗_t for that timestep: loss_t = −log P(w∗_t) (6), and the overall loss for the whole sequence is: loss = (1/T) ∑_{t=0}^{T} loss_t (7). # 2.2 Pointer-generator network
1704.04368#9
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
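A matching sketch of equations (3)–(7), again with toy sizes and random stand-in weights rather than trained parameters; the target index is hypothetical.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(0)
n_src, d_h, d_s, d_out, vocab = 6, 8, 8, 12, 20   # toy sizes (assumptions)
a_t = softmax(rng.normal(size=n_src))       # attention from eqs (1)-(2)
h = rng.normal(size=(n_src, d_h))           # encoder hidden states
s_t = rng.normal(size=d_s)                  # decoder state

h_star = a_t @ h                            # eq (3): h*_t = sum_i a^t_i h_i
V = rng.normal(size=(d_out, d_s + d_h)); b = rng.normal(size=d_out)
Vp = rng.normal(size=(vocab, d_out)); bp = rng.normal(size=vocab)
P_vocab = softmax(Vp @ (V @ np.concatenate([s_t, h_star]) + b) + bp)  # eq (4)

target = 3                                  # hypothetical index of w*_t
loss_t = -np.log(P_vocab[target])           # eq (6); eq (7) averages over t
```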
1704.04341
10
Figure 2: Action a1 has probability p1 of a self-loop and 1 − p1 of transitioning to a non-g state. Action a2 has probability p2 of a self-loop and 1 − p2 of transitioning to a non-g state. The policy that maximizes the probability of satisfaction of □g is highly dependent on p1 and p2 if they are near one. ♦□p corresponds to eventually achieving and continually maintaining the desired property. • Approach-avoid tasks like the 4×3 grid (Russell & Norvig, 1994): If p represents the attribute of being at the goal (the upper right corner of the grid, say), and q represents the attribute of being at a bad state (the state below it, say), ¬q U p corresponds to avoiding the bad state en route to the goal.
1704.04341#10
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
10
loss = (1/T) ∑_{t=0}^{T} loss_t (7). # 2.2 Pointer-generator network Our pointer-generator network is a hybrid between our baseline and a pointer network (Vinyals et al., 2015), as it allows both copying words via pointing, and generating words from a fixed vocabulary. In the pointer-generator model (depicted in Figure 3) the attention distribution a^t and context vector h∗_t are calculated as in section 2.1. In addition, the generation probability p_gen ∈ [0, 1] for timestep t is calculated from the context vector h∗_t, the decoder state s_t and the decoder input x_t:
1704.04368#10
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
11
On the other hand, there are barriers to straightforwardly adopting temporal logic-based languages in a reinforcement-learning setup. The most significant is that we can show that it is simply impossible to learn to satisfy classical LTL specifications in some cases. A key property for being able to learn near-optimal policies efficiently in the context of reward-based MDPs is what is known as the Simulation Lemma. Informally, it says that, for any MDP and any ε > 0, there exists an ε′ > 0 such that finding optimal policies in an ε′-close model of the real environment results in behavior that is ε-close to optimal in the real environment. Unfortunately, tasks specified via LTL do not have this property. In particular, there is an MDP and an ε > 0 such that no ε′-close approximation for ε′ > 0 is sufficient to produce a policy with ε-close satisfaction probability.
1704.04341#11
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
11
p_gen = σ(w_{h∗}^T h∗_t + w_s^T s_t + w_x^T x_t + b_ptr) (8), where vectors w_{h∗}, w_s, w_x and scalar b_ptr are learnable parameters and σ is the sigmoid function. Next, p_gen is used as a soft switch to choose between generating a word from the vocabulary by sampling from P_vocab, or copying a word from the input sequence by sampling from the attention distribution a^t. For each document let the extended vocabulary denote the union of the vocabulary, and all words appearing in the source document. We obtain the following probability distribution over the extended vocabulary: P(w) = p_gen P_vocab(w) + (1 − p_gen) ∑_{i:w_i=w} a^t_i (9). Note that if w is an out-of-vocabulary (OOV) word, then P_vocab(w) is zero; similarly if w does not appear in the source document, then ∑_{i:w_i=w} a^t_i is zero. The ability to produce OOV words is one of the primary advantages of pointer-generator models; by contrast models such as our baseline are restricted to their pre-set vocabulary. The loss function is as described in equations (6) and (7), but with respect to our modified probability distribution P(w) given in equation (9). # 2.3 Coverage mechanism
1704.04368#11
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
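Equations (8)–(9) can be illustrated with the following sketch; the word ids, toy sizes, and the stand-in distributions used for P_vocab and a^t are all assumptions, not the paper's values.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 8                                       # shared toy dimension for h*, s, x
vocab, n_src = 10, 5
h_star, s_t, x_t = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
w_h, w_s, w_x = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
b_ptr = rng.normal()

p_gen = sigmoid(w_h @ h_star + w_s @ s_t + w_x @ x_t + b_ptr)   # eq (8)

P_vocab = softmax(rng.normal(size=vocab))   # stand-in for eq (4)
a_t = softmax(rng.normal(size=n_src))       # stand-in attention distribution
src_ids = np.array([2, 7, 10, 2, 11])       # source word ids; 10 and 11 are OOV
ext_vocab = vocab + 2                       # extended vocabulary adds the OOVs

P = np.zeros(ext_vocab)
P[:vocab] = p_gen * P_vocab                 # generation path
for i, w in enumerate(src_ids):             # copy path, summed over duplicates
    P[w] += (1.0 - p_gen) * a_t[i]          # eq (9)
assert np.isclose(P.sum(), 1.0)
# The repeated source id 2 receives attention mass from both occurrences,
# and OOV ids 10, 11 get nonzero probability only through the copy path.
```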
1704.04341
12
Consider the MDP in Figure 2. If we want to find a behavior that nearly maximizes the probability of satisfying the specification □g (stay in the good state forever), we need accurate estimates of p1 and p2. If p1 = p2 = 1 or p1 < 1 and p2 < 1, either policy is equally good. If p1 = 1 and p2 < 1, only action a1 is near optimal. If p2 = 1 and p1 < 1, only action a2 is near optimal. As there is no finite bound on the number of learning trials needed to distinguish p1 = 1 from p1 < 1, a near optimal behavior cannot be found in worst-case finite time. LTL expressions are simply too unforgiving to be used with any confidence in a learning setting. In this work, we develop a hybrid approach for specifying behavior in reinforcement learning that combines the strengths of both reward functions and temporal logic specifications. # 2 Learning To Satisfy LTL
1704.04341#12
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
12
# 2.3 Coverage mechanism Repetition is a common problem for sequence-to-sequence models (Tu et al., 2016; Mi et al., 2016; Sankaran et al., 2016; Suzuki and Nagata, 2016), and is especially pronounced when generating multi-sentence text (see Figure 1). We adapt the coverage model of Tu et al. (2016) to solve the problem. In our coverage model, we maintain a coverage vector c^t, which is the sum of attention distributions over all previous decoder timesteps: c^t = ∑_{t′=0}^{t−1} a^{t′} (10). Intuitively, c^t is a (unnormalized) distribution over the source document words that represents the degree of coverage that those words have received from the attention mechanism so far. Note that c^0 is a zero vector, because on the first timestep, none of the source document has been covered. The coverage vector is used as extra input to the attention mechanism, changing equation (1) to: e^t_i = v^T tanh(W_h h_i + W_s s_t + w_c c^t_i + b_attn) (11)
1704.04368#12
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
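A sketch of the coverage loop of equations (10)–(11), with random stand-in weights and a few dummy decoder steps; sizes and initializations are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(0)
n_src, d_h, d_s, d_attn = 6, 8, 8, 10       # toy sizes (assumptions)
h = rng.normal(size=(n_src, d_h))           # encoder states h_i
W_h = rng.normal(size=(d_attn, d_h))
W_s = rng.normal(size=(d_attn, d_s))
w_c = rng.normal(size=d_attn)               # same length as v, per the text
b_attn = rng.normal(size=d_attn)
v = rng.normal(size=d_attn)

c = np.zeros(n_src)                         # c^0 = 0: nothing covered yet
for t in range(4):                          # a few decoder steps
    s_t = rng.normal(size=d_s)              # stand-in decoder state
    e_t = np.array([v @ np.tanh(W_h @ h[i] + W_s @ s_t + w_c * c[i] + b_attn)
                    for i in range(n_src)])                 # eq (11)
    a_t = softmax(e_t)
    c = c + a_t                             # eq (10): running sum of attention
```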
1704.04341
13
strengths of both reward functions and temporal logic specifications. # 2 Learning To Satisfy LTL While provable guarantees of efficiency and optimality have been at the core of the literature on learning (Fiechter, 1994; Kearns & Singh, 2002; Brafman & Tennenholtz, 2002; Li et al., 2011), correctness with respect to complicated, high-level task specifications (during the learning itself or in the behavior resulting from the learning phase) has attracted limited attention (Abbeel & Ng, 2005). # 2.1 Geometric linear temporal logic We present a variant of LTL we call geometric linear temporal logic (GLTL) that builds on the logical and temporal operators in LTL while ensuring learnability. The idea of GLTL is roughly to restrict the period of validity of the temporal operators to bounded windows, similar to the bounded semantics of LTL (Manna & Pnueli, 1992). To this end, GLTL introduces operators of the form ♦µ b with the atomic proposition b, which is interpreted as “b eventually holds within k time steps, where k is a random variable following a geometric distribution with parameter µ.” Similar semantics stochastically restricting the window of validity for other temporal operators are also introduced.
1704.04341#13
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
13
e^t_i = v^T tanh(W_h h_i + W_s s_t + w_c c^t_i + b_attn) (11), where w_c is a learnable parameter vector of same length as v. This ensures that the attention mechanism’s current decision (choosing where to attend next) is informed by a reminder of its previous decisions (summarized in c^t). This should make it easier for the attention mechanism to avoid repeatedly attending to the same locations, and thus avoid generating repetitive text. We find it necessary (see section 5) to additionally define a coverage loss to penalize repeatedly attending to the same locations: covloss_t = ∑_i min(a^t_i, c^t_i) (12). Note that the coverage loss is bounded; in particular covloss_t ≤ ∑_i a^t_i = 1. Equation (12) differs from the coverage loss used in Machine Translation. In MT, we assume that there should be a roughly one-to-one translation ratio; accordingly the final coverage vector is penalized if it is more or less than 1.
1704.04368#13
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
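A toy check of equation (12) and its bound; the numbers are illustrative only.

```python
import numpy as np

# Eq (12) penalizes only the overlap between the current attention and the
# coverage so far, so it is bounded by sum_i a^t_i = 1.
a_t = np.array([0.6, 0.3, 0.1])             # attention at step t
c_t = np.array([0.9, 0.1, 0.0])             # accumulated coverage
covloss_t = np.minimum(a_t, c_t).sum()      # = 0.6 + 0.1 + 0.0 = 0.7
assert covloss_t <= a_t.sum()
```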
1704.04341
14
This kind of geometric decay fits very nicely with MDPs for a few reasons. It can be viewed as a generalization of reward discounting, which is already present in many MDP models. It also avoids unnecessarily expanding the specification state space by only requiring extra states to represent events and not simply the passage of time. Using G1(µ) to represent the geometric distribution with parameter µ, the temporal operators are: • ♦µ p: p is achieved in the next k steps, k ∼ G1(µ). • □µ q: q holds for the next k steps, k ∼ G1(µ). • q Uµ p: q must hold at least until p becomes true, which itself must be achieved in the next k steps, k ∼ G1(µ).
1704.04341#14
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
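One way to make the geometric-window semantics concrete is a Monte-Carlo sketch. Both the toy environment (the goal occurs independently with probability q on each step) and the reading of G1(µ) as "the window closes each step with probability 1 − µ" are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def sat_prob_eventually(mu, q, n=200_000, rng=None):
    """Estimate P(eventually_mu p) in a toy chain where p occurs
    independently with probability q per step; window length k ~ G1(mu)."""
    rng = rng or np.random.default_rng(0)
    wins = 0
    for _ in range(n):
        k = rng.geometric(1.0 - mu)          # sampled window length
        wins += bool((rng.random(k) < q).any())  # p holds on some step <= k?
    return wins / n

# Closed form for this toy chain: 1 - (1-mu)(1-q) / (1 - mu(1-q));
# mu = 0.9, q = 0.2 gives ~0.714, matching the estimate.
print(sat_prob_eventually(0.9, 0.2))
```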
1704.04368
14
Our loss function is more flexible: because summarization should not require uniform coverage, we only penalize the overlap between each attention distribution and the coverage so far, preventing repeated attention. Finally, the coverage loss, reweighted by some hyperparameter λ, is added to the primary loss function to yield a new composite loss function: loss_t = −log P(w∗_t) + λ ∑_i min(a^t_i, c^t_i) (13). # 3 Related Work Neural abstractive summarization. Rush et al. (2015) were the first to apply modern neural networks to abstractive text summarization, achieving state-of-the-art performance on DUC-2004 and Gigaword, two sentence-level summarization datasets. Their approach, which is centered on the attention mechanism, has been augmented with recurrent decoders (Chopra et al., 2016), Abstract Meaning Representations (Takase et al., 2016), hierarchical networks (Nallapati et al., 2016), variational autoencoders (Miao and Blunsom, 2016), and direct optimization of the performance metric (Ranzato et al., 2016), further improving performance on those datasets.
1704.04368#14
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
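And the composite loss of equation (13) on the same toy numbers; λ and P(w∗_t) here are hypothetical values, not trained quantities.

```python
import numpy as np

lam = 1.0                                   # hyperparameter lambda (assumed)
P_target = 0.2                              # hypothetical P(w*_t)
a_t = np.array([0.6, 0.3, 0.1])             # attention at step t
c_t = np.array([0.9, 0.1, 0.0])             # coverage before step t
loss_t = -np.log(P_target) + lam * np.minimum(a_t, c_t).sum()   # eq (13)
print(loss_t)                               # ~1.609 + 0.7 = 2.309
```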
1704.04341
15
• q Uµ p: q must hold at least until p becomes true, which itself must be achieved in the next k steps, k ∼ G1(µ). Returning to our earlier example from Figure 2, evaluating the probability of satisfaction for □g requires infinite precision in the learned transition probabilities in the environment. Consider instead evaluating □µ g in this environment. An encoding of the specification for this example is shown in Figure 3 (Third). We call it a specification MDP, as it specifies the task using states (derived from the formula), actions (representing conditions), and probabilities (capturing the stochasticity of operator expiration). This example says that, from the initial state q0, encountering any state where g is not true results in immediately failing the specification. In contrast, encountering any state where g is true results in either continued evaluation (with probability µ) or success (with probability 1 − µ). Success represents the idea that the temporal window in which g must hold true has expired without g being violated.
1704.04341#15
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
15
However, large-scale datasets for summarization of longer text are rare. Nallapati et al. (2016) adapted the DeepMind question-answering dataset (Hermann et al., 2015) for summarization, resulting in the CNN/Daily Mail dataset, and provided the first abstractive baselines. The same authors then published a neural extractive approach (Nallapati et al., 2017), which uses hierarchical RNNs to select sentences, and found that it significantly outperformed their abstractive result with respect to the ROUGE metric. To our knowledge, these are the only two published results on the full dataset. Prior to modern neural methods, abstractive summarization received less attention than extractive summarization, but Jing (2000) explored cutting unimportant parts of sentences to create summaries, and Cheung and Penn (2014) explore sentence fusion using dependency trees. Pointer-generator networks. The pointer network (Vinyals et al., 2015) is a sequence-to-sequence model that uses the soft attention distribution of Bahdanau et al. (2015) to produce an output sequence consisting of elements from
1704.04368#15
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
16
Composing these two MDPs leads to the composite MDP in Figure 3 (Fourth). The true satisfaction probability for action a_i is (1 − µ)/(1 − µ p_i). Thus, if p_2 = .9, the dependence of this value on an estimation error ε is (1 − µ)/(1 − µ(.9 + ε)), which is well behaved for all values of ε. The sensitivity of the computed satisfaction probability has a maximum 1/(1 − µ)² dependence on the accuracy of the estimate of ε. Thus, GLTL is considerably more friendly to learning than is LTL. Returning to the MDP example in Figure 1, we find that GLTL is also more expressive than rewards. The GLTL formula ¬q Uµ p can be translated to a specification MDP. Essentially, the idea is that encountering a bad state (q) even once or running out of time results in specification failure. Maximizing the satisfaction of this GLTL formula results in taking action a1 regardless of the value of p. That is, it is an environment-independent specification of the task.
1704.04341#16
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
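The closed form reconstructed above can be sanity-checked by simulation under the specification-MDP semantics described in the preceding chunks. The environment here (a single g self-loop with stay probability p) mirrors one branch of the Figure 2 MDP; everything else is an illustrative assumption.

```python
import numpy as np

def sat_prob_always(mu, p, n=200_000, rng=None):
    """Estimate P(always_mu g) when the environment stays in g w.p. p:
    each step the window expires (success) w.p. 1 - mu, otherwise
    evaluation continues and leaving g fails the specification."""
    rng = rng or np.random.default_rng(0)
    wins = 0
    for _ in range(n):
        while True:
            if rng.random() >= mu:      # window expires while g holds
                wins += 1
                break
            if rng.random() >= p:       # self-loop broken: g violated
                break
    return wins / n

mu, p = 0.8, 0.9
print(sat_prob_always(mu, p), (1 - mu) / (1 - mu * p))   # both ~0.714
```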
1704.04341
17
The reason the GLTL formulation is able to succeed where standard rewards fail is that the GLTL formula results in an augmentation of the state space so that the reward function can depend on whether a bad state has yet been encountered. On the first encounter, a penalty can be issued. After the first encounter, no additional penalty is added. By composing the environment MDP with this bit of internal memory, the task can be expressed provably correctly and in an environment-independent way. # 3 Related Work
1704.04341#17
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
17
Our approach is close to the Forced-Attention Sentence Compression model of Miao and Blunsom (2016) and the CopyNet model of Gu et al. (2016), with some small differences: (i) We calculate an explicit switch probability p_gen, whereas Gu et al. induce competition through a shared softmax function. (ii) We recycle the attention distribution to serve as the copy distribution, but Gu et al. use two separate distributions. (iii) When a word appears multiple times in the source text, we sum probability mass from all corresponding parts of the attention distribution, whereas Miao and Blunsom do not. Our reasoning is that (i) calculating an explicit p_gen usefully enables us to raise or lower the probability of all generated words or all copy words at once, rather than individually, (ii) the two distributions serve such similar purposes that we find our simpler approach suffices, and (iii) we observe that the pointer mechanism often copies a word while attending to multiple occurrences of it in the source text.
1704.04368#17
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
18
# 3 Related Work Discounting has been used in previous temporal models. In quantitative temporal logic, it gives more weight to the satisfaction of a logic property in the near future than the far future. De Alfaro et al. (2003, 2004) augment computation tree logic (CTL) with discounting and develop fixpoint-based algorithms for checking such properties for probabilistic systems and games. Almagor et al. (2014) explicitly refine the “eventually” operator of LTL to a discounting operator such that the longer it takes to fulfill the task the smaller the value of satisfaction. Further, they show that discounted LTL is more expressive than discounted CTL. They use both discounted until and undiscounted until for expressing traditional eventually as well as its discounted version. However, algorithms for model checking and synthesis of discounted LTL for probabilistic systems and games are yet to be developed. LTL has been used extensively in robotics domains. Work on the trustworthiness of autonomous robots, automated verification and synthesis with provable correctness with respect to temporal logic-based specifications in motion, task, and mission planning has attracted considerable attention recently. Table 1: Operator precedence in specification MDP construction.
1704.04341#18
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
18
Our approach is considerably different from that of Gulcehre et al. (2016) and Nallapati et al. (2016). Those works train their pointer components to activate only for out-of-vocabulary words or named entities (whereas we allow our model to freely learn when to use the pointer), and they do not mix the probabilities from the copy distribution and the vocabulary distribution. We believe the mixture approach described here is better for abstractive summarization – in section 6 we show that the copy mechanism is vital for accurately reproducing rare but in-vocabulary words, and in section 7.2 we observe that the mixture model enables the language model and copy mechanism to work together to perform abstractive copying.
1704.04368#18
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
19
| Precedence | Operator | # of Operands |
|---|---|---|
| 1 | not, ¬ | 1 |
| 1 | µ-always, □µ | 1 |
| 2 | µ-eventually, ♦µ | 1 |
| 2 | µ-until, Uµ | 2 |
| 3 | and, ∧ | 2 |
| 4 | or, ∨ | 2 |

The results include open-loop and reactive control of deterministic, stochastic or non-deterministic finite-state models as well as continuous-state models through appropriate finite-state abstractions (Wongpiromsarn et al., 2012; Kress-Gazit et al., 2009; Liu et al., 2013; Wolff et al., 2012; Ding et al., 2011; Lahijanian et al., 2011; Kress-Gazit et al., 2011). While temporal logic had initially focused on reasoning about temporal and logical relations, its dialects with probabilistic modalities have been used increasingly for robotics applications (Baier & Katoen, 2008; De Alfaro, 1998; Kwiatkowska et al., 2002).

# 4 Generating Specification MDPs
1704.04341#19
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
19
Coverage. Originating from Statistical Machine Translation (Koehn, 2009), coverage was adapted for NMT by Tu et al. (2016) and Mi et al. (2016), who both use a GRU to update the coverage vector each step. We find that a simpler approach – summing the attention distributions to obtain the coverage vector – suffices. In this respect our approach is similar to Xu et al. (2015), who apply a coverage-like method to image captioning, and Chen et al. (2016), who also incorporate a coverage mechanism (which they call 'distraction') as described in equation (11) into neural summarization of longer text.
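A minimal sketch of this summed-attention coverage (names and loop structure are illustrative; the loss term anticipates the coverage penalty discussed later in the paper):

```python
import numpy as np

def coverage_so_far(attn_history):
    """Coverage vector and accumulated coverage loss from past attention.

    attn_history: list of (T,) attention distributions, one per decoder
    step taken so far.
    """
    coverage = np.zeros_like(attn_history[0])   # c_0 = 0: nothing covered yet
    loss = 0.0
    for attn in attn_history:
        # Penalize attention landing where coverage is already high:
        # sum_i min(a_t_i, c_t_i).
        loss += np.minimum(attn, coverage).sum()
        coverage += attn                         # c_{t+1} = c_t + a_t
    return coverage, loss
```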
1704.04368#19
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
20
# 4 Generating Specification MDPs

Similar to LTL, GLTL formulas are built from a set of atomic propositions AP, the Boolean operators ∧ (conjunction) and ¬ (negation), and the temporal operator Uµ (µ-until). Useful operators such as ∨ (disjunction), ♦µ (µ-eventually) and □µ (µ-always) can be derived from these basic operators. GLTL formulas can be converted to the corresponding specification MDPs recursively, with the operator precedence listed in descending order in Table 1. Operators of the same precedence are read from right to left. For example, ♦µ1♦µ2ϕ = ♦µ1(♦µ2ϕ) and ϕ1Uµ1ϕ2Uµ2ϕ3 = ϕ1Uµ1(ϕ2Uµ2ϕ3). Assume ϕ, ϕ1, ϕ2 are GLTL formulas in the following discussion. • b, where b ∈ AP is an atomic proposition: A specification MDP Mb = ({sini, acc, rej}, {a}, T, R) for b can be constructed such that, if b holds at sini, the transition (sini, a, acc) is taken with probability 1; otherwise, the transition (sini, a, rej) is taken with probability 1.
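To make the construction concrete, here is a small illustrative Python encoding of Mb; the dictionary layout and the `prop` predicate are our own conventions, not code from the paper:

```python
def atomic_spec_mdp(prop):
    """Specification MDP M_b for an atomic proposition b (sketch).

    `prop` is a predicate over the current environment labels.
    """
    def transition(state, labels):
        if state == "sini":
            # From the initial state, accept iff b holds now.
            return "acc" if prop(labels) else "rej"
        return state  # acc and rej are absorbing sink states

    return {"states": ["sini", "acc", "rej"],
            "actions": ["a"],
            "initial": "sini",
            "transition": transition}

# Example: the proposition `red` holds when the current cell is red.
M_red = atomic_spec_mdp(lambda labels: "red" in labels)
```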
1704.04341#20
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
20
Temporal attention is a related technique that has been applied to NMT (Sankaran et al., 2016) and summarization (Nallapati et al., 2016). In this approach, each attention distribution is divided by the sum of the previous, which effectively dampens repeated attention. We tried this method but found it too destructive, distorting the signal from the attention mechanism and reducing performance. We hypothesize that an early intervention method such as coverage is preferable to a post hoc method such as temporal attention – it is better to inform the attention mechanism to help it make better decisions, than to override its decisions altogether. This theory is supported by the large boost that coverage gives our ROUGE scores (see Table 1), compared to the smaller boost given by temporal attention for the same task (Nallapati et al., 2016).

# 4 Dataset
1704.04368#20
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
21
• ¬ϕ: A specification MDP M¬ϕ can be constructed from a specification MDP Mϕ by swapping the terminal states acc and rej. • ϕ1 ∧ ϕ2: A specification MDP Mϕ1∧ϕ2 can be constructed from the specification MDPs Mϕ1 = (S1, A1, T1, R1) and Mϕ2 = (S2, A2, T2, R2) for ϕ1 and ϕ2.

Figure 3: First: The specification MDP representation of the LTL formula b. Second: The specification MDP representation of the LTL formula ♦b. Third: The specification MDP representation of the GLTL formula ♦µb. Fourth: The composition of the specification MDP representation of the GLTL formula ♦µb with the MDP from Figure 2. [Figure graphics lost in extraction; only the caption is retained.]
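The negation rule above is purely mechanical; a sketch in the same hypothetical encoding as the atomic-proposition example:

```python
def negate_spec_mdp(spec):
    """M_{¬ϕ}: identical to M_ϕ but with acc and rej swapped (sketch)."""
    base_transition = spec["transition"]
    swap = {"acc": "rej", "rej": "acc"}

    def transition(state, labels):
        nxt = base_transition(state, labels)
        return swap.get(nxt, nxt)   # swap only the terminal verdicts

    negated = dict(spec)
    negated["transition"] = transition
    return negated
```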
1704.04341#21
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
21
# 4 Dataset

We use the CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016), which contains online news articles (781 tokens on average) paired with multi-sentence summaries (3.75 sentences or 56 tokens on average). We used scripts supplied by Nallapati et al. (2016) to obtain the same version of the data, which has 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs. The dataset's published results (Nallapati et al., 2016, 2017) both use the anonymized version of the data, which has been pre-processed to replace each named entity, e.g., The United Nations, with its own unique identifier for the example pair, e.g., @entity5. By contrast, we operate directly on the original text (or non-anonymized version of the data),2 which we believe is the favorable problem to solve because it requires no pre-processing.

# 5 Experiments
1704.04368#21
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
23
• ϕ1 ∨ ϕ2 = ¬(¬ϕ1 ∧ ¬ϕ2). • ϕ1Uµϕ2: The operator µ-until has two operands, ϕ1 and ϕ2, which generate specification MDPs Mϕ1 = (S1, A1, T1, R1) and Mϕ2 = (S2, A2, T2, R2). The new specification MDP Mϕ1Uµϕ2 = (S, A, T, R) is constructed from Mϕ1 and Mϕ2: S = (S1\{acc1, rej1}) × (S2\{acc2, rej2}) ∪ {acc, rej}, where acc and rej are the accepting and rejecting states, respectively, and sini = (sini1, sini2) ∈ S is the initial state; A = A1 × A2; for all s = (s1, s2) ∈ S\{acc, rej}, a = (a1, a2) ∈ A, s'1 ∈ S1, and s'2 ∈ S2, if T1(s1, a1, s'1) > 0 and T2(s2, a2, s'2) > 0, a transition (s, a, s') is added to Mϕ1Uµϕ2 with probability T(s, a, s') as specified in Table 2.
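A sampling-style sketch of one step of this product construction (hedged: `step1`/`step2` are assumed functions that advance the two sub-MDPs given the current labels, and the handling of non-terminal states follows the expiry intuition described in the text):

```python
import random

def until_step(s1, s2, step1, step2, mu, labels):
    """One sampled step of M_{ϕ1 U_µ ϕ2} (illustrative sketch)."""
    n1, n2 = step1(s1, labels), step2(s2, labels)
    if n2 == "acc":              # ϕ2 succeeded before ϕ1 failed
        return "acc"
    if n1 == "rej":              # ϕ1 failed before ϕ2 succeeded
        return "rej"
    if random.random() > mu:     # the µ-until operator expired
        return "rej"
    return (n1, n2)              # otherwise keep tracking both sub-MDPs
```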
1704.04341#23
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
23
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR (exact match) | METEOR (+ stem/syn/para) |
|---|---|---|---|---|---|
| abstractive model (Nallapati et al., 2016)* | 35.46 | 13.30 | 32.65 | - | - |
| seq-to-seq + attn baseline (150k vocab) | 30.49 | 11.17 | 28.08 | 11.65 | 12.86 |
| seq-to-seq + attn baseline (50k vocab) | 31.33 | 11.81 | 28.83 | 12.03 | 13.20 |
| pointer-generator | 36.44 | 15.66 | 33.42 | 15.35 | 16.65 |
| pointer-generator + coverage | 39.53 | 17.28 | 36.38 | 17.32 | 18.72 |
| lead-3 baseline (ours) | 40.34 | 17.70 | 36.57 | 20.48 | 22.21 |
| lead-3 baseline (Nallapati et al., 2017)* | 39.2 | 15.7 | 35.5 | - | - |
| extractive model (Nallapati et al., 2017)* | 39.6 | 16.2 | 35.3 | - | - |

Table 1: ROUGE F1 and METEOR scores on the test set. Models and baselines in the top half are abstractive, while those in the bottom half are extractive. Those marked with * were trained and evaluated on the anonymized dataset, and so are not strictly comparable to our results on the original text. All our
1704.04368#23
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
24
Here are some intuitions behind the construction of T. The formula ϕ1Uµϕ2 means that, within some stochastically decided time period k, we would like to successfully implement task ϕ2 in at most k steps without ever failing in task ϕ1. If we observe a success in Mϕ2 (that is, the specification reaches acc2) before ϕ1 fails (that is, the specification reaches rej1), Mϕ1Uµϕ2 goes to state acc for sure; if we observe a failure in Mϕ1 (that is, the specification reaches rej1) before succeeding in Mϕ2 (that is, the specification reaches acc2), Mϕ1Uµϕ2 goes to state rej for sure. In all other cases, Mϕ1Uµϕ2 primarily keeps track of the transitions in Mϕ1 and Mϕ2, with a tiny probability of failing immediately, which corresponds to the operator expiring.
1704.04341#24
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
24
extractive. Those marked with * were trained and evaluated on the anonymized dataset, and so are not strictly comparable to our results on the original text. All our ROUGE scores have a 95% confidence interval of at most ±0.25 as reported by the official ROUGE script. The METEOR improvement from the 50k baseline to the pointer-generator model, and from the pointer-generator to the pointer-generator+coverage model, were both found to be statistically significant using an approximate randomization test with p < 0.01.
1704.04368#24
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
25
Table 2: Transition (s, a, s') in Mϕ1Uµϕ2 constructed from a transition (s1, a1, s'1) in Mϕ1 and a transition (s2, a2, s'2) in Mϕ2. Here, p(s'|s'1, s'2) = T(s, a, s') / (T1(s1, a1, s'1) · T2(s2, a2, s'2)). That is, to get the transition probability, multiply the p column by the corresponding T1 and T2 transition probabilities. [The table body did not survive extraction; its rows implement the intuition described above, with µ-weighted tracking transitions and a 1−µ expiry probability.]
1704.04341#25
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
25
a smaller vocabulary size than Nallapati et al.'s (2016) 150k source and 60k target vocabularies. For the baseline model, we also try a larger vocabulary size of 150k. Note that the pointer and the coverage mechanism introduce very few additional parameters to the network: for the models with vocabulary size 50k, the baseline model has 21,499,600 parameters, the pointer-generator adds 1153 extra parameters (wh∗, ws, wx and bptr in equation 8), and coverage adds 512 extra parameters (wc in equation 11). Unlike Nallapati et al. (2016), we do not pre-train the word embeddings – they are learned from scratch during training. We train using Adagrad (Duchi et al., 2011) with learning rate 0.15 and an initial accumulator value of 0.1. (This was found to work best out of Stochastic Gradient Descent, Adadelta, Momentum, Adam and RMSProp.) We use gradient clipping with a maximum gradient norm of 2, but do not use any form of regularization. We use loss on the validation set to implement early stopping.
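As a concrete illustration of these training settings, here is a hedged PyTorch sketch (the paper's implementation is in TensorFlow; the model stub is a placeholder, and only the optimizer and clipping hyperparameters come from the text):

```python
import torch

# Placeholder model standing in for the seq-to-seq summarizer.
model = torch.nn.LSTM(input_size=128, hidden_size=256)
optimizer = torch.optim.Adagrad(
    model.parameters(), lr=0.15, initial_accumulator_value=0.1)

def train_step(loss):
    optimizer.zero_grad()
    loss.backward()
    # Gradient clipping with a maximum gradient norm of 2; no other
    # form of regularization is applied.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
    optimizer.step()
```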
1704.04368#25
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
26
MDP Mϕ2 = (S2, A2, T2, R2) for ϕ2, we can construct a specification MDP M♦µϕ2 = (S, A, T, R) for ♦µϕ2: S = S2, sini = sini2, acc = acc2, rej = rej2; A = A2; the transitions of M♦µϕ2 are modified from those of Mϕ2 as in Table 3. Informally, ♦µϕ2 is satisfied if we succeed in task ϕ2 within the stochastic observation time period. • □µϕ2: µ-always □µϕ2 is equivalent to ¬♦µ¬ϕ2 = ¬(♦µ(¬ϕ2)). In other words, □µϕ2 is satisfied if we did not witness a failure of ϕ2 within the stochastic observation time period. The transitions of a specification MDP M□µϕ2 can be constructed from Table 3, or directly from Table 4. Using the transitions as described, a given GLTL formula can be converted into a specification MDP. To satisfy the specification in a given environment, a joint MDP is created as follows:
1704.04341#26
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
26
During training and at test time we truncate the article to 400 tokens and limit the length of the summary to 100 tokens for training and 120 tokens at test time.3 This is done to expedite training and testing, but we also found that truncating the article can raise the performance of the model (see section 7.1 for more details). For training, we found it efficient to start with highly-truncated sequences, then raise the maximum length once converged. We train on a single Tesla K40m GPU with a batch size of 16. At test time our summaries are produced using beam search with beam size 4.

3 The upper limit of 120 is mostly invisible: the beam search algorithm is self-stopping and almost never reaches the 120th step.
1704.04368#26
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
27
• ♦µϕ2: As in the semantics of LTL, µ-eventually ♦µϕ2 = True Uµϕ2. Hence, given a specification 1. Take the cross product of the MDP representing the environment and the specification MDP. Table 3: Transition (s, a, s') in M♦µϕ2 constructed from a transition (s2, a2, s'2) in Mϕ2. As above, p(s'|s'2) = T(s, a, s') / T2(s2, a2, s'2). [Table body lost in extraction.] Table 4: Transition (s, a, s') in M□µϕ2 constructed from a transition (s2, a2, s'2) in Mϕ2. As above, p(s'|s'2) = T(s, a, s') / T2(s2, a2, s'2). [Table body lost in extraction.]
1704.04341#27
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
27
We trained both our baseline models for about 600,000 iterations (33 epochs) – this is similar to the 35 epochs required by Nallapati et al.'s (2016) best model. Training took 4 days and 14 hours for the 50k vocabulary model, and 8 days 21 hours for the 150k vocabulary model. We found the pointer-generator model quicker to train, requiring less than 230,000 training iterations (12.8 epochs); a total of 3 days and 4 hours. In particular, the pointer-generator model makes much quicker progress in the early phases of training. To obtain our final coverage model, we added the coverage mechanism with coverage loss weighted to λ = 1 (as described in equation 13), and trained for a further 3000 iterations (about 2 hours). In this time the coverage loss converged to about 0.2, down from an initial value of about 0.5. We also tried a more aggressive value of λ = 2; this reduced coverage loss but increased the primary loss function, thus we did not use it.
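The weighted objective being tuned here is simple; a minimal sketch (the loss tensors are placeholders, and the λ values come from the text):

```python
def total_loss(nll_loss, coverage_loss, lam=1.0):
    """Primary negative log-likelihood plus λ-weighted coverage penalty.

    λ = 1 was used for the final model; λ = 2 reduced the coverage loss
    further but hurt the primary objective.
    """
    return nll_loss + lam * coverage_loss
```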
1704.04368#27
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
28
2. Any state that corresponds to an accepting or rejecting state of the specification MDP becomes a sink state. However, the accepting states also include a reward of +1. 3. The resulting MDP is solved to create a policy. The resulting policy is one that maximizes the probability of satisfying the given formula, where the random events are both the transitions in the environment and the stochastic transitions in the specification MDP (see the value-iteration sketch below). Such policies tend to prefer satisfying formulas quickly, as that increases the chance of successful completion before operators expire.

# 5 Example Domain

Consider the following formula: (¬blue Uµred) ∧ (♦µ(red ∧ ♦µgreen)). It specifies a task of reaching a red state without encountering a blue state and, once a red state is reached, going to a green state.
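A hedged value-iteration sketch of steps 1–3 (the (A, S, S) tensor layout and the discount γ are our assumptions, not the paper's code; with accepting sinks paying +1 once, the value at the initial state approximates the satisfaction probability):

```python
import numpy as np

def solve_joint_mdp(P, accepting, rejecting, gamma=0.999, iters=2000):
    """P: (A, S, S) transitions of the environment x specification MDP.
    accepting / rejecting: boolean masks over the S product states.
    """
    terminal = accepting | rejecting
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        # Reward +1 on entering an accepting sink; sinks accrue nothing after.
        target = accepting.astype(float) + gamma * np.where(terminal, 0.0, V)
        Q = P @ target                      # (A, S): expected one-step values
        V = np.where(terminal, 0.0, Q.max(axis=0))
    return V, Q.argmax(axis=0)              # values and a greedy policy
```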
1704.04341#28
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
28
We tried training the coverage model without the loss function, hoping that the attention mechanism may learn by itself not to attend repeatedly to the same locations, but we found this to be ineffective, with no discernible reduction in repetition. We also tried training with coverage from the first iteration rather than as a separate training phase, but found that in the early phase of training, the coverage objective interfered with the main objective, reducing overall performance.

# 6 Results

# 6.1 Preliminaries

Our results are given in Table 1. We evaluate our models with the standard ROUGE metric (Lin, 2004b), reporting the F1 scores for ROUGE-1, ROUGE-2 and ROUGE-L (which respectively measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the summary to be evaluated). We obtain our ROUGE scores using the pyrouge package.4 We also evaluate with the METEOR metric (Denkowski and Lavie, 2014), both in exact match mode (rewarding only exact matches between words) and full mode (which additionally rewards matching stems, synonyms and paraphrases).5
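For reference, the standard way of driving the pyrouge package looks roughly like this (a sketch; the directory names and filename patterns below are illustrative, not the paper's):

```python
from pyrouge import Rouge155

r = Rouge155()
# Hypothetical directory layout: one summary per file.
r.system_dir = "decoded"               # model-generated summaries
r.model_dir = "reference"              # gold reference summaries
r.system_filename_pattern = r"(\d+)_decoded.txt"
r.model_filename_pattern = "#ID#_reference.txt"

output = r.convert_and_evaluate()      # runs the official ROUGE script
scores = r.output_to_dict(output)
print(scores["rouge_1_f_score"],
      scores["rouge_2_f_score"],
      scores["rouge_l_f_score"])
```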
1704.04368#28
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
29
It specifies a task of reaching a red state without encountering a blue state and, once a red state is reached, going to a green state. Figure 4 illustrates a grid world environment in which this task can be carried out. It consists of different colored grid cells. The agent can move to one of the four cells adjacent to its current position with a north, south, east, or west action. However, selecting an action for one direction has a 0.02 probability of moving in one of the three other directions. This stochastic movement causes the agent to keep its distance from dangerous grid cells that could result in task failure, whenever possible.

Figure 4: The optimal path in a grid world.

The solid line in the figure traces the path of the optimal policy of following this specification in the grid. As can be seen, the agent moves to red and then green. Note that this behavior can be very difficult to encode in a standard reward function, as both green and red need to be given positive reward and therefore either would be a sensible place for the agent to stop.
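The slip dynamics described above are easy to sketch (an illustration, not the paper's code; we split the total 0.02 slip mass uniformly over the other directions, which is our assumption about the exact noise model):

```python
import random

MOVES = {"north": (-1, 0), "south": (1, 0), "east": (0, 1), "west": (0, -1)}

def step(pos, action, n_rows, n_cols, slip=0.02):
    """One stochastic move: follow `action`, except that with
    probability `slip` the agent moves in one of the three other
    directions instead (chosen uniformly here)."""
    if random.random() < slip:
        action = random.choice([a for a in MOVES if a != action])
    dr, dc = MOVES[action]
    r = min(max(pos[0] + dr, 0), n_rows - 1)   # clamp at the grid edges
    c = min(max(pos[1] + dc, 0), n_cols - 1)
    return (r, c)
```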
1704.04341#29
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04341
30
Figure 5 illustrates a grid world environment in which the blue cells create a partial barrier between the red and green cells. As a result of the "until" in the specification, the agent goes around the blue wall to get to the red cell. However, since the prohibition against blue cells is lifted once the red cell is reached, it goes directly through the barrier to reach green. These 25-state environments become 98-state MDPs when combined with the specification MDP.

# 6 Conclusion

In contrast to standard MDP reward functions, we have provided an environment-independent specification for tasks. We have shown that this specification language can capture standard tasks used in the MDP community and that it can be automatically incorporated into an environment MDP to create a fixed MDP to solve. Maximizing reward in this resulting MDP maximizes the probability of satisfying the task specification. Future work includes inverse reinforcement learning of task specifications and techniques for accelerating planning.
1704.04341#30
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
30
Given that we generate plain-text summaries but Nallapati et al. (2016; 2017) generate anonymized summaries (see Section 4), our ROUGE scores are not strictly comparable. There is evidence to suggest that the original-text dataset may result in higher ROUGE scores in general than the anonymized dataset – the lead-3 baseline is higher on the former than the latter. One possible explanation is that multi-word named entities lead to a higher rate of n-gram overlap. Unfortunately, ROUGE is the only available means of comparison with Nallapati et al.'s work. Nevertheless, given that the disparity in the lead-3 scores is (+1.1 ROUGE-1, +2.0 ROUGE-2, +1.1 ROUGE-L) points respectively, and our best model scores exceed Nallapati et al. (2016) by (+4.07 ROUGE-1, +3.98 ROUGE-2, +3.73 ROUGE-L) points, we may estimate that we outperform the only previous abstractive system by at least 2 ROUGE points all-round.
1704.04368#30
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
31
Figure 5: The optimal path in a slightly more complex grid world.

# References

Abbeel, Pieter and Ng, Andrew Y. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, pp. 1–8, 2005.

Almagor, Shaull, Boker, Udi, and Kupferman, Orna. Discounting in LTL. In Tools and Algorithms for the Construction and Analysis of Systems, pp. 424–439. Springer, 2014.

Atkeson, Christopher G. Using local trajectory optimizers to speed up global optimization in dynamic programming. In Advances in Neural Information Processing Systems, pp. 663–663, 1994.

Bacchus, Fahiem, Boutilier, Craig, and Grove, Adam. Rewarding behaviors. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pp. 1160–1167. AAAI Press/The MIT Press, 1996.

Baier, Christel and Katoen, Joost-Pieter. Principles of Model Checking. MIT Press, 2008.
1704.04341#31
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
31
4 pypi.python.org/pypi/pyrouge/0.1.3
5 www.cs.cmu.edu/~alavie/METEOR
6 www.github.com/abisee/pointer-generator

[Figure 4: bar chart comparing the percentage of duplicate 1-grams, 2-grams, 3-grams, 4-grams and whole sentences for the pointer-generator without coverage, the pointer-generator with coverage, and the reference summaries; graphics lost in extraction.]

Figure 4: Coverage eliminates undesirable repetition. Summaries from our non-coverage model contain many duplicated n-grams while our coverage model produces a similar number as the reference summaries.

# 6.2 Observations
1704.04368#31
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
32
Baier, Christel and Katoen, Joost-Pieter. Principles of Model Checking. MIT Press, 2008.

Barto, Andrew G., Sutton, Richard S., and Anderson, Charles W. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(5):834–846, 1983.

Boutilier, Craig, Dean, Thomas, and Hanks, Steve. Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11:1–94, 1999.

Brafman, Ronen I. and Tennenholtz, Moshe. R-MAX: a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213–231, 2002.

De Alfaro, Luca. Formal verification of probabilistic systems. PhD thesis, Stanford University, Stanford, CA, USA, 1998.

De Alfaro, Luca, Henzinger, Thomas A, and Majumdar, Rupak. Discounting the future in systems theory. In Automata, Languages and Programming, pp. 1022–1037. Springer, 2003.
1704.04341#32
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
32
# 6.2 Observations

We find that both our baseline models perform poorly with respect to ROUGE and METEOR, and in fact the larger vocabulary size (150k) does not seem to help. Even the better-performing baseline (with 50k vocabulary) produces summaries with several common problems. Factual details are frequently reproduced incorrectly, often replacing an uncommon (but in-vocabulary) word with a more common alternative. For example in Figure 1, the baseline model appears to struggle with the rare word thwart, producing destabilize instead, which leads to the fabricated phrase destabilize nigeria's economy. Even more catastrophically, the summaries sometimes devolve into repetitive nonsense, such as the third sentence produced by the baseline model in Figure 1. In addition, the baseline model can't reproduce out-of-vocabulary words (such as muhammadu buhari in Figure 1). Further examples of all these problems are provided in the supplementary material. Our pointer-generator model achieves much better ROUGE and METEOR scores than the baseline, despite many fewer training epochs. The difference in the summaries is also marked: out-of-vocabulary words are handled easily, factual details are almost always copied correctly, and there are no fabrications (see Figure 1). However, repetition is still very common.
1704.04368#32
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
33
De Alfaro, Luca, Faella, Marco, Henzinger, Thomas A, Majumdar, Rupak, and Stoelinga, Mariëlle. Model checking discounted temporal properties. Springer, 2004.

Dietterich, Thomas G. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000. URL citeseer.ist.psu.edu/article/dietterich00hierarchical.html.

Ding, Xu Chu, Smith, Stephen L., Belta, Calin, and Rus, Daniela. LTL control in uncertain environments with probabilistic satisfaction guarantees. CoRR, abs/1104.1159, 2011.

Fiechter, Claude-Nicolas. Efficient reinforcement learning. In Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory, pp. 88–97. Association of Computing Machinery, 1994.

Kearns, Michael and Singh, Satinder. Near-optimal reinforcement learning in polynomial time. In Proceedings of the 15th International Conference on Machine Learning, pp. 260–268, 1998. URL citeseer.nj.nec.com/kearns98nearoptimal.html.
1704.04341#33
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
33
Our pointer-generator model with coverage improves the ROUGE and METEOR scores further, convincingly surpassing the best abstractive model

Article: smugglers lure arab and african migrants by offering discounts to get onto overcrowded ships if people bring more potential passengers, a cnn investigation has revealed. (...)
Summary: cnn investigation uncovers the business inside a human smuggling ring.

Article: eyewitness video showing white north charleston police officer michael slager shooting to death an unarmed black man has exposed discrepancies in the reports of the first officers on the scene. (...)
Summary: more questions than answers emerge in controversial s.c. police shooting.

Figure 5: Examples of highly abstractive reference summaries (bold denotes novel words).

of Nallapati et al. (2016) by several ROUGE points. Despite the brevity of the coverage training phase (about 1% of the total training time), the repetition problem is almost completely eliminated, which can be seen both qualitatively (Figure 1) and quantitatively (Figure 4). However, our best model does not quite surpass the ROUGE scores of the lead-3 baseline, nor the current best extractive model (Nallapati et al., 2017). We discuss this issue in section 7.1.

# 7 Discussion
1704.04368#33
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
34
Kearns, Michael J. and Singh, Satinder P. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2–3):209–232, 2002.

Kress-Gazit, H., Fainekos, G.E., and Pappas, G.J. Temporal-logic-based reactive mission and motion planning. IEEE Trans. on Robotics, 25:1370–1381, 2009.

Kress-Gazit, H., Wongpiromsarn, T., and Topcu, U. Correct, reactive robot control from abstraction and temporal logic specifications. IEEE RAM, 18:65–74, 2011.

Kwiatkowska, Marta, Norman, Gethin, and Parker, David. PRISM: Probabilistic symbolic model checker. In Computer Performance Evaluation: Modelling Techniques and Tools, volume 2324, pp. 113–140. Springer, 2002.

Lahijanian, M., Andersson, S. B., and Belta, C. Control of Markov decision processes from PCTL specifications. In Proc. of the American Control Conference, pp. 311–316, 2011.
1704.04341#34
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
34
# 7 Discussion

# 7.1 Comparison with extractive systems

It is clear from Table 1 that extractive systems tend to achieve higher ROUGE scores than abstractive, and that the extractive lead-3 baseline is extremely strong (even the best extractive system beats it by only a small margin). We offer two possible explanations for these observations. Firstly, news articles tend to be structured with the most important information at the start; this partially explains the strength of the lead-3 baseline. Indeed, we found that using only the first 400 tokens (about 20 sentences) of the article yielded significantly higher ROUGE scores than using the first 800 tokens.
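The lead-3 baseline itself is trivial to reproduce; a sketch (sentence splitting via NLTK is our choice here, and it assumes the punkt tokenizer models are installed):

```python
import nltk

def lead3(article_text):
    """Lead-3 baseline: the first three sentences of the article."""
    sentences = nltk.sent_tokenize(article_text)
    return " ".join(sentences[:3])

# Example usage (hypothetical file):
# print(lead3(open("article.txt").read()))
```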
1704.04368#34
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
35
Li, Lihong, Littman, Michael L., Walsh, Thomas J., and Strehl, Alexander L. Knows what it knows: A frame- work for self-aware learning. Machine Learning, 82(3): 399–443, 2011. Liu, Jun, Ozay, Necmiye, Topcu, Ufuk, and Murray, Richard M. Synthesis of reactive switching protocols from temporal logic specifications. IEEE Transactions on Automatic Control, 58(7):1771–1785, 2013. Manna, Zohar and Pnueli, Amir. The Temporal Logic of Reactive & Concurrent Sys. . Springer, 1992. Moore, Andrew W. Variable resolution dynamic program- ming: Efficiently learning action maps in multivariate In Proc. Eighth International Ma- real-valued spaces. chine Learning Workshop, 1991. Puterman, Martin L. Markov Decision Processes— Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, 1994. Russell, Stuart J. and Norvig, Peter. Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ, 1994. ISBN 0-13-103805-2.
1704.04341#35
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
35
Secondly, the nature of the task and the ROUGE metric make extractive approaches and the lead-3 baseline difficult to beat. The choice of content for the reference summaries is quite subjective – sometimes the sentences form a self-contained summary; other times they simply showcase a few interesting details from the article. Given that the articles contain 39 sentences on average, there are many equally valid ways to choose 3 or 4 highlights in this style. Abstraction introduces even more options (choice of phrasing), further decreasing the likelihood of matching the reference summary. For example, smugglers profit from desperate migrants is a valid alternative abstractive summary for the first example in Figure 5, but it scores 0 ROUGE with respect to the reference summary. This inflexibility of ROUGE is exacerbated by only having one reference summary, which has been shown to lower ROUGE’s reliability compared to multiple reference summaries (Lin, 2004a).
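To make the "scores 0 ROUGE" point concrete, here is a deliberately simplified ROUGE-1 F1 (whitespace tokens, no stemming; the official scorer used in the paper, via the pyrouge package, is more involved):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# A perfectly valid abstractive paraphrase can share no tokens with the
# single reference, and so scores 0:
print(rouge1_f1("smugglers profit from desperate migrants",
                "traffickers exploit those fleeing by sea"))  # 0.0
```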
1704.04368#35
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04341
36
Strehl, Alexander L., Li, Lihong, and Littman, Michael L. Reinforcement learning in finite MDPs: PAC analysis. Journal of Machine Learning Research, 10:2413–2444, 2009. Watkins, Christopher J. C. H. Learning from Delayed Re- wards. PhD thesis, King’s College, Cambridge, UK, 1989. Wolff, Eric M., Topcu, Ufuk, and Murray, Richard M. Ro- bust control of uncertain markov decision processes with temporal logic specifications. In Proc. of the IEEE Con- ference on Decision and Control, 2012. Wongpiromsarn, T., Topcu, U., and Murray, R.M. Reced- ing horizon temporal logic planning. IEEE T. on Auto- matic Control, 57:2817–2830, 2012.
1704.04341#36
Environment-Independent Task Specifications via GLTL
We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.
http://arxiv.org/pdf/1704.04341
Michael L. Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, James MacGlashan
cs.AI
null
null
cs.AI
20170414
20170414
[]
1704.04368
36
Due to the subjectivity of the task and thus the diversity of valid summaries, it seems that ROUGE rewards safe strategies such as selecting the first-appearing content, or preserving original phrasing. While the reference summaries do sometimes deviate from these techniques, those deviations are unpredictable enough that the safer strategy obtains higher ROUGE scores on average. This may explain why extractive systems tend to obtain higher ROUGE scores than abstractive, and even extractive systems do not significantly exceed the lead-3 baseline. To explore this issue further, we evaluated our systems with the METEOR metric, which rewards not only exact word matches, but also matching stems, synonyms and paraphrases (from a predefined list). We observe that all our models receive over 1 METEOR point boost by the inclusion of stem, synonym and paraphrase matching, indicating that they may be performing some abstraction. However, we again observe that the lead-3 baseline is not surpassed by our models. It may be that news article style makes the lead-3 baseline very strong with respect to any metric. We believe that investigating this issue further is an important direction for future work. # 7.2 How abstractive is our model?
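The boost METEOR gets from stem matching can be illustrated with a toy unigram matcher. This is only a sketch: METEOR's real alignment also consults synonym and paraphrase tables, and this snippet assumes NLTK's Porter stemmer is installed:

```python
from nltk.stem import PorterStemmer  # assumes nltk is available

stemmer = PorterStemmer()

def unigram_matches(candidate: str, reference: str, use_stems: bool = False) -> int:
    """Count candidate unigrams found in the reference, optionally via stems."""
    norm = (lambda w: stemmer.stem(w)) if use_stems else (lambda w: w)
    cand = [norm(w) for w in candidate.lower().split()]
    ref = {norm(w) for w in reference.lower().split()}
    return sum(w in ref for w in cand)

c = "smugglers exploited desperate migrants"
r = "smuggler exploits migrant families"
print(unigram_matches(c, r))                  # 0: no exact-form matches
print(unigram_matches(c, r, use_stems=True))  # 3: smuggler/exploit/migrant align
```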
1704.04368#36
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
37
# 7.2 How abstractive is our model? We have shown that our pointer mechanism makes our abstractive system more reliable, copying factual details correctly more often. But does the ease of copying make our system any less abstractive? Figure 6 shows that our final model’s summaries contain a much lower rate of novel n-grams (i.e., those that don’t appear in the article) than the reference summaries, indicating a lower degree of abstraction. Note that the baseline model produces novel n-grams more frequently – however, this statistic includes all the incorrectly copied words, UNK tokens and fabrications alongside the good instances of abstraction. [Figure 6 (plot): y-axis "% that are novel", 0–100; x-axis: 1-grams, 2-grams, 3-grams, 4-grams, sentences; series: pointer-generator + coverage, sequence-to-sequence + attention baseline, reference summaries.]
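The novel n-gram statistic behind Figure 6 can be computed as below; a hypothetical sketch assuming pre-tokenized, lowercased inputs (the paper's exact tokenization is not reproduced here):

```python
def novel_ngram_rate(summary_tokens, article_tokens, n):
    """Fraction of summary n-grams that never appear in the source article."""
    grams = lambda toks: {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    summ, art = grams(summary_tokens), grams(article_tokens)
    return len(summ - art) / len(summ) if summ else 0.0

article = "andy murray beat dominic thiem in the miami open on friday".split()
summary = "murray defeated thiem in miami".split()
for n in (1, 2, 3):
    # Higher rates for larger n: whole copied phrases are rarer than words.
    print(n, round(novel_ngram_rate(summary, article, n), 2))
```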
1704.04368#37
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
38
Figure 6: Although our best model is abstractive, it does not produce novel n-grams (i.e., n-grams that don’t appear in the source text) as often as the reference summaries. The baseline model produces more novel n-grams, but many of these are erroneous (see section 7.2). Article: andy murray (...) is into the semi-finals of the miami open , but not before getting a scare from 21 year-old austrian dominic thiem, who pushed him to 4-4 in the second set before going down 3-6 6-4, 6-1 in an hour and three quarters. (...) Summary: andy murray defeated dominic thiem 3-6 6-4, 6-1 in an hour and three quarters. Article: (...) wayne rooney smashes home during manchester united ’s 3-1 win over aston villa on saturday. (...) Summary: manchester united beat aston villa 3-1 at old trafford on saturday. Figure 7: Examples of abstractive summaries produced by our model (bold denotes novel words).
1704.04368#38
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
39
Figure 7: Examples of abstractive summaries produced by our model (bold denotes novel words). In particular, Figure 6 shows that our final model copies whole article sentences 35% of the time; by comparison the reference summaries do so only 1.3% of the time. This is a main area for improvement, as we would like our model to move beyond simple sentence extraction. However, we observe that the other 65% encompasses a range of abstractive techniques. Article sentences are truncated to form grammatically-correct shorter versions, and new sentences are composed by stitching together fragments. Unnecessary interjections, clauses and parenthesized phrases are sometimes omitted from copied passages. Some of these abilities are demonstrated in Figure 1, and the supplementary material contains more examples. Figure 7 shows two examples of more impressive abstraction — both with similar structure. The dataset contains many sports stories whose summaries follow the X beat Y (score) on (day) template, which may explain why our model is most confidently abstractive on these examples. In general however, our model does not routinely produce summaries like those in Figure 7, and is not close to producing summaries like in Figure 5.
1704.04368#39
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
40
The value of the generation probability pgen also gives a measure of the abstractiveness of our model. During training, pgen starts with a value of about 0.30 then increases, converging to about 0.53 by the end of training. This indicates that the model first learns to mostly copy, then learns to generate about half the time. However at test time, pgen is heavily skewed towards copying, with a mean value of 0.17. The disparity is likely due to the fact that during training, the model receives word-by-word supervision in the form of the reference summary, but at test time it does not. Nonetheless, the generator module is useful even when the model is copying. We find that pgen is highest at times of uncertainty such as the beginning of sentences, the join between stitched-together fragments, and when producing periods that truncate a copied sentence. Our mixture model allows the network to copy while simultaneously consulting the language model – enabling operations like stitching and truncation to be performed with grammaticality. In any case, encouraging the pointer-generator model to write more abstractively, while retaining the accuracy advantages of the pointer module, is an exciting direction for future work. # 8 Conclusion
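For reference, the pgen mixture this chunk describes blends the generator's vocabulary distribution with the copy (attention) distribution, P(w) = pgen · Pvocab(w) + (1 − pgen) · Σ attention on source positions holding w. A NumPy sketch, with shapes and the extended-vocabulary bookkeeping for out-of-vocabulary source words simplified (array sizes and ids below are illustrative):

```python
import numpy as np

def final_distribution(p_gen, p_vocab, attention, src_ids, vocab_size):
    """Mix the vocabulary distribution with the copy distribution via p_gen."""
    p = p_gen * p_vocab                      # generator share, shape (vocab_size,)
    copy = np.zeros(vocab_size)
    np.add.at(copy, src_ids, attention)      # scatter-add attention onto word ids
    return p + (1.0 - p_gen) * copy

vocab_size = 10
p_vocab = np.full(vocab_size, 1.0 / vocab_size)  # toy uniform vocab distribution
attention = np.array([0.7, 0.2, 0.1])            # over 3 source positions
src_ids = np.array([4, 4, 7])                    # word id 4 appears twice in source
dist = final_distribution(0.17, p_vocab, attention, src_ids, vocab_size)
print(dist.round(3), dist.sum())                 # still sums to 1
```

With pgen = 0.17 (the mean test-time value quoted above), most probability mass flows through the copy term, which matches the observation that the model is heavily skewed towards copying at test time.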
1704.04368#40
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
41
# 8 Conclusion In this work we presented a hybrid pointer-generator architecture with coverage, and showed that it reduces inaccuracies and repetition. We applied our model to a new and challenging long-text dataset, and significantly outperformed the abstractive state-of-the-art result. Our model exhibits many abstractive abilities, but attaining higher levels of abstraction remains an open research question. # 9 Acknowledgment We thank the ACL reviewers for their helpful comments. This work was begun while the first author was an intern at Google Brain and continued at Stanford. Stanford University gratefully acknowledges the support of the DARPA DEFT Program AFRL contract no. FA8750-13-2-0040. Any opinions in this material are those of the authors alone. # References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In International Joint Conference on Artificial Intelligence.
1704.04368#41
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
42
Jackie Chi Kit Cheung and Gerald Penn. 2014. Unsupervised sentence enhancement for automatic summarization. In Empirical Methods in Natural Language Processing. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In North American Chapter of the Association for Computational Linguistics. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In EACL 2014 Workshop on Statistical Machine Translation. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121–2159. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Association for Computational Linguistics.
1704.04368#42
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
43
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Neural Information Processing Systems. Hongyan Jing. 2000. Sentence reduction for automatic text summarization. In Applied natural language processing. Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In International ACM SIGIR conference on Research and development in information retrieval. Chin-Yew Lin. 2004a. Looking for a few good metrics: Automatic summarization evaluation – how many samples are enough? In NACSIS/NII Test Collection for Information Retrieval (NTCIR) Workshop. Chin-Yew Lin. 2004b. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: ACL workshop. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. In NIPS 2016 Workshop on Multi-class and Multi-label Learning in Extremely Large Label Spaces.
1704.04368#43
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
44
Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Empirical Methods in Natural Language Processing. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Empirical Methods in Natural Language Processing. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Association for the Advancement of Artificial Intelligence. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çaglar Gulçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Computational Natural Language Learning. Chris D Paice. 1990. Constructing literature abstracts by computer: techniques and prospects. Information Processing & Management 26(1):171–186. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In International Conference on Learning Representations.
1704.04368#44
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
45
Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Empirical Methods in Natural Language Processing. Horacio Saggion and Thierry Poibeau. 2013. Automatic text summarization: Past, present and future. In Multi-source, Multilingual Information Extraction and Summarization, Springer, pages 3–21. Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and Abe Ittycheriah. 2016. Temporal attention model for neural machine translation. arXiv preprint arXiv:1608.02927. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Neural Information Processing Systems. Jun Suzuki and Masaaki Nagata. 2016. RNN-based encoder-decoder approach with word frequency estimation. arXiv preprint arXiv:1701.00138. Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Empirical Methods in Natural Language Processing.
1704.04368#45
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
46
Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Association for Computational Linguistics. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Neural Information Processing Systems. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning. Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel Urtasun. 2016. Efficient summarization with read-again and copy mechanism. arXiv preprint arXiv:1611.03382. # Supplementary Material This appendix provides examples from the test set, with side-by-side comparisons of the reference summaries and the summaries produced by our models. In each example: • italics denote out-of-vocabulary words • red denotes factual errors in the summaries • green shading intensity represents the value of the generation probability pgen
1704.04368#46
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
48
his wedding next week before ensuring that he still has unfinished tennis business to attend to . the world no 4 is into the semi-finals of the miami open , but not before getting a scare from 21 year-old austrian dominic thiem , who pushed him to 4-4 in the second set before going down 3-6 6-4 , 6-1 in an hour and three quarters . murray was awaiting the winner from the last eight match between tomas berdych and argentina ’s juan monaco . prior to this tournament thiem lost in the second round of a challenger event to soon-to-be new brit aljaz bedene . andy murray pumps his fist after defeating dominic thiem to reach the miami open semi finals . murray throws his sweatband into the crowd after completing a 3-6 , 6-4 , 6-1 victory in florida . murray shakes hands with thiem who he described as a ’ strong guy ’ after the game . and murray has a fairly simple message for any of his fellow british tennis players who might be agitated about his imminent arrival into the home ranks : do n’t complain
1704.04368#48
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
50
Reference Summary: british no 1 defeated dominic thiem in miami open quarter finals . andy murray celebrated his 500th career win in the previous round . third seed will play the winner of tomas berdych and juan monaco in the semi finals of the atp masters 1000 event in key biscayne . Baseline: andy murray beat UNK bedene 6-3 , 6-4 , 6-1 in an hour and three quarters . british no 1 believes his colleagues should use the maze of the world number 83 , originally from slovenia , as motivation to better themselves . Pointer-Generator, No Coverage: andy murray came close to giving himself some extra preparation for his wedding next week before getting a scare from 21 year-old austrian dominic thiem . the world no 4 is into the semi-finals of the miami open , but not before getting a scare from 21 year-old austrian dominic thiem . Pointer-Generator, With Coverage: andy murray defeated dominic thiem 3-6 6-4 , 6-1 in an hour and three quarters . murray was awaiting the winner from the last eight match between tomas berdych and argentina ’s juan monaco .
1704.04368#50
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
51
murray was awaiting the winner from the last eight match between tomas berdych and argentina ’s juan monaco . prior to this tournament thiem lost in the second round of a challenger event to soon-to-be new brit aljaz bedene . Figure 8: The baseline model reports the wrong score 6-3, substitutes bedene for thiem and struggles with the uncommon word assimilation. The pointer-network models accurately reproduce the out-of-vocabulary words thiem and aljaz. Note that the final model produces the novel word defeated to incorporate several fragments into a single sentence.
1704.04368#51
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
52
Article (truncated): munster have signed new zealand international francis saili on a two-year deal . utility back saili , who made his all blacks debut against argentina in 2013 , will move to the province later this year after the completion of his 2015 contractual commitments . the 24-year-old currently plays for auckland-based super rugby side the blues and was part of the new zealand under-20 side that won the junior world championship in italy in 2011 . saili ’s signature is something of a coup for munster and head coach anthony foley believes he will be a great addition to their backline . francis saili has signed a two-year deal to join munster and will link up with them later this year . ’ we are really pleased that francis has committed his future to the province , ’ foley told munster ’s official website . ’ he is a talented centre with an impressive skill-set and he possesses the physical attributes to excel in the northern hemisphere . ’ i believe he will be a great addition to our backline and we look forward to welcoming him to munster . ’ saili has been capped twice by new zealand and was part of the under 20 side that won the junior championship in 2011 .
1704.04368#52
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
53
Reference Summary: utility back francis saili will join up with munster later this year . the new zealand international has signed a two-year contract . saili made his debut for the all blacks against argentina in 2013 . Baseline: dutch international francis UNK has signed a two-year deal to join irish UNK super rugby side the blues . UNK ’s signature is something of a coup for munster and his head coach anthony foley believes he will be a great addition to their respective prospects . UNK has been capped twice by new zealand . Pointer-Generator, No Coverage: new zealand international francis saili will move to the province later this year . utility back saili made his all blacks debut against argentina in 2013 . utility back saili will move to the province later this year . Pointer-Generator, With Coverage: francis saili has signed a two-year deal to join munster later this year . the 24-year-old was part of the new zealand under-20 side that won the junior world championship in italy in 2011 . saili ’s signature is something of a coup for munster and head coach anthony foley .
1704.04368#53
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]
1704.04368
54
championship in italy in 2011 . saili ’s signature is something of a coup for munster and head coach anthony foley . Figure 9: The baseline model incorrectly substitutes dutch for new zealand (perhaps reflecting the European bias of the dataset), fabricates irish, and struggles with out-of-vocabulary words saili and auckland-based. Though it is not clear why, the phrase addition to our backline is changed to the nonsensical addition to their respective prospects. The pointer-generator model fixes these accuracy problems, and the addition of coverage fixes the repetition problem. Note that the final model skips over large passages of text to produce shorter sentences. Article (truncated): right from the moment he breezed through the doors at old trafford , louis
1704.04368#54
Get To The Point: Summarization with Pointer-Generator Networks
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
http://arxiv.org/pdf/1704.04368
Abigail See, Peter J. Liu, Christopher D. Manning
cs.CL
Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments
null
cs.CL
20170414
20170425
[ { "id": "1701.00138" }, { "id": "1611.03382" }, { "id": "1608.02927" } ]