doi (string, len 10) | chunk-id (int64, 0-936) | chunk (string, 401-2.02k chars) | id (string, 12-14) | title (string, 8-162) | summary (string, 228-1.92k) | source (string, 31) | authors (string, 7-6.97k) | categories (string, 5-107) | comment (string, 4-398, nullable) | journal_ref (string, 8-194, nullable) | primary_category (string, 5-17) | published (string, 8) | updated (string, 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1707.08817 | 24 |
(a) Number of demonstration trajectories. (b) Real robot experiment.
[Figure 4 plots: environment reward versus environment interaction time in minutes; curves compare sparse reward with demonstrations against shaped reward without demonstrations.]
Figure 4: (a) Learning curves for DDPGfD on the clip insertion task with varying amounts of demonstration data. DDPGfD can learn to solve the sparse-reward task given only a single trajectory from a human demonstrator. (b) Performance from 2 runs on a real robot. DDPGfD learns faster than DDPG and without the engineered reward function.
demonstrations. This was surprising, since each demonstration contains only one state transition with non-zero reward.
Finally, we show results of DDPGfD learning the clip insertion task on a physical Sawyer robot in Figure 4(b). DDPGfD was able to learn a robust insertion policy on the real robot. DDPGfD with sparse rewards outperforms DDPG with a shaped reward, showing that DDPGfD achieves faster learning without the extra engineering.
A video demonstrating the performance can be viewed here: https://www.youtube.com/watch?v=WGJwLfeVN9w
# 6 Related work | 1707.08817#24 | Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards | We propose a general and model-free approach for Reinforcement Learning (RL)
on real robotics with sparse rewards. We build upon the Deep Deterministic
Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and
actual interactions are used to fill a replay buffer and the sampling ratio
between demonstrations and transitions is automatically tuned via a prioritized
replay mechanism. Typically, carefully engineered shaping rewards are required
to enable the agents to efficiently explore on high dimensional control
problems such as robotics. They are also required for model-based acceleration
methods relying on local solvers such as iLQG (e.g. Guided Policy Search and
Normalized Advantage Function). The demonstrations replace the need for
carefully engineered rewards, and reduce the exploration problem encountered by
classical RL approaches in these domains. Demonstrations are collected by a
robot kinesthetically force-controlled by a human demonstrator. Results on four
simulated insertion tasks show that DDPG from demonstrations out-performs DDPG,
and does not require engineered rewards. Finally, we demonstrate the method on
a real robotics task consisting of inserting a clip (flexible object) into a
rigid object. | http://arxiv.org/pdf/1707.08817 | Mel Vecerik, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, Martin Riedmiller | cs.AI | null | null | cs.AI | 20170727 | 20181008 | [
{
"id": "1704.03732"
}
] |
1707.08817 | 25 |
Imitation learning is primarily concerned with matching expert demonstrations. Our work combines imitation learning with learning from task rewards, so that the agent is able to improve upon the demonstrations it has seen. Imitation learning can be cast as a supervised learning problem (like classification) [10, 11]. One popular imitation learning algorithm is DAGGER [12], which iteratively produces new policies based on polling the expert policy outside its original state space; this gives a no-regret guarantee over validation data in the online learning sense. DAGGER requires the expert to be available during training to provide additional feedback to the agent.
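To make the DAGGER loop described above concrete, here is a minimal, self-contained sketch on a toy 1-D task; the expert, the rollout environment, and the nearest-neighbour learner are all stand-ins chosen for brevity, not components from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_action(state):
    """Toy expert for a 1-D task: always move toward the origin."""
    return 0 if state < 0 else 1          # 0 = step right, 1 = step left

def rollout(policy, start=3.0, steps=20):
    """Run the learner's own policy and record the states it visits."""
    s, visited = start, []
    for _ in range(steps):
        visited.append(s)
        s += 0.5 if policy(s) == 0 else -0.5
    return visited

def fit_policy(states, actions):
    """1-nearest-neighbour 'classifier' standing in for any supervised learner."""
    states, actions = np.asarray(states), np.asarray(actions)
    return lambda s: actions[np.argmin(np.abs(states - s))]

# DAGGER: roll out the current policy, query the expert on the visited states,
# aggregate the labelled data, and refit; the expert must stay available.
data_s, data_a = [0.0], [expert_action(0.0)]
policy = fit_policy(data_s, data_a)
for _ in range(5):
    for s in rollout(policy, start=rng.uniform(-5, 5)):
        data_s.append(s)
        data_a.append(expert_action(s))   # expert labels states the learner reached
    policy = fit_policy(data_s, data_a)
print(len(data_s), "aggregated expert-labelled states")
```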
1707.08817 | 26 | Imitation can also be achieved through inverse optimal control or inverse RL. The main principle is to learn a cost or a reward function under which the demonstration data is optimal. For instance, in [16, 17] the inverse RL problem is cast as a two-player zero-sum game where one player chooses policies and the other chooses reward functions. However, this approach does not scale to continuous state-action spaces and requires knowledge of the dynamics. To address continuous state spaces and unknown dynamics, [5] solve inverse RL by combining classification and regression, but the method is restricted to discrete action spaces. Demonstrations have also been used for inverse optimal control in high-dimensional, continuous robotic control problems [1]. However, these approaches only perform imitation learning and do not allow for learning from task rewards.
1707.08817 | 27 | Guided Cost Learning (GCL) [1] and Generative Adversarial Imitation Learning (GAIL) [4] are the first efficient imitation learning algorithms to learn from high-dimensional inputs without knowledge of the dynamics and without hand-crafted features. They have a very similar algorithmic structure, which consists of matching the distribution of the expert trajectories: they simultaneously learn the reward and the policy that imitates the expert demonstrations. At each step, sampled trajectories of the current policy and the expert policy are used to produce a reward function; this reward is then (partially) optimized to produce an updated policy, and so on. In GAIL, the reward is obtained from a network trained to discriminate between expert trajectories and (partial) trajectories sampled from a generator (the policy), which is itself trained by TRPO [14]. In GCL, the reward is obtained by minimization of the Maximum Entropy IRL cost [20], and any RL algorithm (DDPG, TRPO, etc.) can be used to optimize this reward.
1707.08817 | 28 |
Control in continuous state-action domains typically uses smooth shaped rewards that are designed to be amenable to classical analysis yielding closed-form solutions. Such requirements can be difficult to meet in real-world applications. For instance, iterative Linear Quadratic Gaussian (iLQG) [18] is a method for nonlinear stochastic systems where the dynamics are known and the reward has to be quadratic (and thus entails hand-crafted task designs). It uses iterative linearization of the dynamics around the current trajectory to obtain a noisy linear system (where the noise is a centered Gaussian) and where the reward constraints are quadratic. The algorithm then uses the Riccati family of equations to obtain locally linear optimal trajectories that improve on the current trajectory.
1707.08817 | 29 | Guided Policy Search [6] aims at finding an optimal policy by decomposing the problem into three steps. First, it uses nominal or expert trajectories, obtained by previous interactions with the environment, to learn locally linear approximations of its dynamics. Then, it uses optimal control algorithms such as iLQG or DDP to find the locally linear optimal policies corresponding to these dynamics. Finally, via supervised learning, a neural network is trained to fit the trajectories generated by these policies. Here again, there is a quadratic constraint on the reward, which must be purposely shaped.
Normalized Advantage Functions (NAF) [2] with model-based acceleration is a model-free RL algorithm that uses imagination rollouts coming from a model learned from previous interactions with the environment or via expert demonstrations. NAF is the natural extension of Q-learning to the continuous case, where the advantage function is parameterized as a quadratic function of non-linear state features. The uni-modal nature of this function allows the maximizing action for the Q-function to be obtained directly as the mean policy. This formulation makes the greedy step of Q-learning tractable for continuous action domains. Then, similarly to GPS, locally linear approximations of the dynamics of the environment are learned and iLQG is used to produce model-guided rollouts to accelerate learning.
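The quadratic-advantage idea behind NAF can be illustrated numerically. The sketch below assumes made-up network outputs (mu, L, v) for a single state and only shows why the greedy action coincides with the policy mean; it is not an implementation of NAF itself.

```python
import numpy as np

def naf_q(a, mu, L, v):
    """NAF-style critic: Q(s, a) = V(s) - 0.5 (a - mu)^T P (a - mu),
    with P = L L^T positive semi-definite, so argmax_a Q(s, a) = mu."""
    P = L @ L.T
    diff = a - mu
    return v - 0.5 * diff @ P @ diff

# Values that a network would output for one state (illustrative numbers only).
mu = np.array([0.2, -0.1])            # greedy action = mean of the policy
L = np.array([[1.0, 0.0], [0.3, 0.5]])
v = 2.0                               # state value V(s)

print(naf_q(mu, mu, L, v))                    # 2.0: the maximum, attained at a = mu
print(naf_q(np.array([1.0, 1.0]), mu, L, v))  # any other action scores lower
```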
1707.08817 | 30 | The most similar work to ours is DQfD [3], which combines Deep Q-Networks (DQN) [8] with learning from demonstrations in a similar way to DDPGfD. It additionally adds a supervised loss to keep the agent close to the policy from the demonstrations. However, DQfD is restricted to domains with discrete action spaces and is not applicable to robotics.
# 7 Conclusion
In this paper we presented DDPGfD, an off-policy RL algorithm which uses demonstration trajectories to quickly bootstrap performance on challenging motor tasks specified by sparse rewards. DDPGfD utilizes a prioritized replay mechanism to prioritize samples across both demonstration and self-generated agent data. In addition, it incorporates n-step returns to better propagate the sparse rewards across the entire trajectory.
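As a rough illustration of prioritizing samples across demonstration and agent data, the sketch below assigns proportional priorities from TD errors with a constant bonus for demonstration transitions; the constants and the exact priority formula are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

EPS, EPS_DEMO = 1e-3, 0.2   # illustrative constants; the bonus keeps demos sampled

def priorities(td_errors, is_demo):
    """Proportional priorities: squared TD error plus a constant, with an extra
    bonus for demonstration transitions so they keep being replayed."""
    td_errors, is_demo = np.asarray(td_errors), np.asarray(is_demo, dtype=float)
    return td_errors ** 2 + EPS + EPS_DEMO * is_demo

def sample(batch_size, td_errors, is_demo, alpha=0.3, rng=np.random.default_rng(0)):
    p = priorities(td_errors, is_demo) ** alpha
    p /= p.sum()
    return rng.choice(len(p), size=batch_size, p=p)   # indices into the buffer

# Two demonstration transitions and three agent transitions with assorted TD errors.
idx = sample(4, td_errors=[0.5, 0.1, 0.9, 0.05, 0.3], is_demo=[1, 1, 0, 0, 0])
print(idx)
```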
1707.08817 | 31 | Most work on RL in high-dimensional continuous control problems relies on well-tuned shaping rewards, both for communicating the goal to the agent and for easing the exploration problem. While many of these tasks can be defined by a terminal goal state fairly easily, tuning a proper shaping reward that does not lead to degenerate solutions is very difficult. This task only becomes more difficult for multi-stage tasks such as insertion. In this work, we replaced these difficult-to-tune shaping reward functions with demonstrations of the task from a human demonstrator. This eases the exploration problem without requiring careful tuning of shaping rewards.
In our experiments we sought to determine whether demonstrations are a viable alternative to shaping rewards for training object insertion tasks. Insertion is an important subclass of object manipulation, with extensive applications in manufacturing. In addition, it is a challenging set of domains for shaping rewards, as it requires two stages: one for reaching the insertion point, and one for inserting the object. Our results suggest that deep RL is poised to have a large impact on real robot applications by extending the learning-from-demonstration paradigm to include richer, force-sensitive policies.
# References
1707.08817 | 32 | [1] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In Proc. of ICML, 2016.
[2] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep q-learning with model-based acceleration. In Proc. of ICML, 2016.
[3] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, A. Sendonaris, G. Dulac- Arnold, I. Osband, J. Agapiou, et al. Learning from demonstrations for real world reinforcement learning. arXiv preprint arXiv:1704.03732, 2017.
[4] J. Ho and S. Ermon. Generative adversarial imitation learning. In Proc. of NIPS, 2016.
[5] E. Klein, B. Piot, M. Geist, and O. Pietquin. A cascaded supervised learning approach to inverse reinforcement learning. In Proc. of ECML, 2013.
[6] S. Levine and V. Koltun. Guided policy search. In Proc. of ICML, pages 1-9, 2013.
1707.08817 | 33 |
[7] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In Proc. of ICLR, 2016.
[8] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[9] A. Y. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proc. of ICML, volume 99, pages 278-287, 1999.
1707.08817 | 34 | [10] D. A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Proc. of NIPS, 1989.
[11] N. Ratliff, J. A. Bagnell, and S. S. Srinivasa. Imitation learning for locomotion and manipulation. In 2007 7th IEEE-RAS International Conference on Humanoid Robots, 2007.
[12] S. Ross, G. J. Gordon, and J. A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proc. of AISTATS, 2011.
[13] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In Proc. of ICLR, volume abs/1511.05952, 2016.
[14] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. In Proc. of ICML, 2015.
[15] R. S. Sutton and A. G. Barto. Introduction to reinforcement learning. MIT Press, 1998.
[16] U. Syed and R. E. Schapire. A game-theoretic approach to apprenticeship learning. In Proc. of NIPS, 2007.
1707.08817 | 35 |
[17] U. Syed, M. Bowling, and R. E. Schapire. Apprenticeship learning using linear programming. In Proc. of ICML, 2008.
[18] E. Todorov and W. Li. A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems. In Proceedings of the 2005 American Control Conference, pages 300-306. IEEE, 2005.
[19] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Proc. of IROS, pages 5026-5033, 2012.
[20] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In Proc. of AAAI, pages 1433-1438, 2008.
1707.08817 | 36 |
# A Real robot safety
To be able to run DDPG on the real robot we needed to ensure that the agent does not apply excessive force. To do this we created an intermediate impedance controller which subjects the agent's commands to safety constraints before relaying them to the robot. It modifies the target velocity set by the agent according to the externally applied forces:
u_control = u_agent * k_a + f_applied * k_f
where u_agent is the agent's control signal, f_applied are externally applied forces such as the clip pushing against the housing, and k_a and k_f are constants chosen to set the correct sensitivity. We further clip the velocity control signal u_control to limit the maximal speed increase while still allowing the agent to stop quickly. This increases the control stability of the system.
This allowed us to keep the agent's control frequency, u_agent, at 5 Hz while still having a physically safe system, as f_applied and u_control were updated at 1 kHz.
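A minimal sketch of how such a velocity-limiting safety layer could look in code; the gains K_A and K_F, the clipping bound, and the function name are assumptions for illustration only, not the values used on the Sawyer.

```python
import numpy as np

# Illustrative constants; the paper does not report the actual gains.
K_A = 1.0          # sensitivity to the agent's velocity command
K_F = 0.05         # sensitivity to externally applied forces
U_MAX = 0.1        # hard limit on the relayed velocity command

def safe_control(u_agent: np.ndarray, f_applied: np.ndarray) -> np.ndarray:
    """Combine the agent's command with measured external forces and clip it.

    u_control = u_agent * K_A + f_applied * K_F, then limited so the arm
    cannot be driven fast even if the policy misbehaves.
    """
    u_control = u_agent * K_A + f_applied * K_F
    return np.clip(u_control, -U_MAX, U_MAX)

# The agent runs at 5 Hz while the safety layer runs at 1 kHz: between agent
# updates, the wrapper keeps re-evaluating safe_control with fresh force readings.
u_agent = np.array([0.05, -0.02, 0.0])        # latest 5 Hz policy output
f_applied = np.array([0.0, 1.5, -0.3])        # 1 kHz force-torque reading
print(safe_control(u_agent, f_applied))
```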
# Algorithm 1: DDPG from Demonstrations
Input: Env environment; θ^π initial policy parameters; θ^π' initial policy target parameters; θ^Q initial action-value parameters; θ^Q' initial action-value target parameters; N' target network replacement frequency; ε action noise.
Input: B replay buffer initialized with demonstrations; k number of pre-training gradient updates.
1707.08817 | 37 |
Output: Q(·|θ^Q) action-value function (critic) and π(·|θ^π) the policy (actor).
/* Learning via interaction with the environment */
1 for episode e ∈ {1, ..., M} do
2   Initialise state s_0 ~ Env
3   for steps t ∈ {1, ..., EpisodeLength} do
4     Sample noise from Gaussian n_t = N(0, ε)
5     Select an action a_t = π(s_{t-1} | θ^π) + n_t
6     Get next state and reward s_t, r_t = T(s_{t-1}, a_t), R(s_t)
7     Add the single-step transition (s_{t-1}, a_t, r_t, γ, s_t) to the replay buffer
8     Add the n-step transition (s_{t-n}, a_{t-n+1}, Σ_{i=0}^{n-1} γ^i r_{t-n+1+i}, γ^n, s_t) to the replay buffer
9   end
10  for steps l ∈ {1, ..., EpisodeLength × LearningSteps} do
11    Sample a minibatch with prioritization from the replay buffer and calculate L_1(θ^Q) and L_n(θ^Q) as appropriate for
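Lines 7-8 of Algorithm 1 can be sketched as follows; the toy episode, the plain-list buffer, and the discount/window constants are placeholders (DDPGfD itself uses a prioritized buffer that is pre-filled with demonstration transitions).

```python
from collections import deque

GAMMA, N_STEP = 0.99, 5

def n_step_transition(window):
    """Collapse a window of (s, a, r, s_next) tuples into one n-step tuple:
    (s_{t-n}, a_{t-n+1}, sum_i gamma^i * r_i, gamma^n, s_t), as in line 8."""
    s_first, a_first, _, _ = window[0]
    ret = sum((GAMMA ** i) * r for i, (_, _, r, _) in enumerate(window))
    s_last = window[-1][3]
    return (s_first, a_first, ret, GAMMA ** len(window), s_last)

# Toy episode with a sparse reward only at the final step, standing in for
# real environment interaction (states/actions are just integers here).
episode = [(t, t, 0.0, t + 1) for t in range(9)] + [(9, 9, 1.0, 10)]

replay_buffer = []          # a prioritized buffer in DDPGfD; plain list here
window = deque(maxlen=N_STEP)
for (s, a, r, s_next) in episode:
    replay_buffer.append((s, a, r, GAMMA, s_next))                  # line 7
    window.append((s, a, r, s_next))
    if len(window) == N_STEP:
        replay_buffer.append(n_step_transition(list(window)))       # line 8

print(len(replay_buffer))   # 10 single-step + 6 n-step transitions
```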
1707.07402 | 0 | arXiv:1707.07402v4 [cs.CL] 11 Nov 2017
# Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback
Khanh Nguyen and Hal Daumé III and Jordan Boyd-Graber. University of Maryland: Computer Science, Language Science, iSchool, UMIACS; Microsoft Research, New York. {kxnguyen, hal, jbg}@umiacs.umd.edu
# Abstract
Machine translation is a natural candidate problem for reinforcement learning from human feedback: users provide quick, dirty ratings on candidate translations to guide a system to improve. Yet, current neural machine translation training focuses on expensive human-generated reference translations. We describe a reinforcement learning algorithm that improves neural machine translation systems from simulated human feedback. Our algorithm combines the advantage actor-critic algorithm (Mnih et al., 2016) with the attention-based neural encoder-decoder architecture (Luong et al., 2015). This algorithm (a) is well-designed for problems with a large action space and delayed rewards, (b) effectively optimizes traditional corpus-level machine translation metrics, and (c) is robust to skewed, high-variance, granular feedback modeled after actual human behaviors. | 1707.07402#0 | Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback | Machine translation is a natural candidate problem for reinforcement learning
from human feedback: users provide quick, dirty ratings on candidate
translations to guide a system to improve. Yet, current neural machine
translation training focuses on expensive human-generated reference
translations. We describe a reinforcement learning algorithm that improves
neural machine translation systems from simulated human feedback. Our algorithm
combines the advantage actor-critic algorithm (Mnih et al., 2016) with the
attention-based neural encoder-decoder architecture (Luong et al., 2015). This
algorithm (a) is well-designed for problems with a large action space and
delayed rewards, (b) effectively optimizes traditional corpus-level machine
translation metrics, and (c) is robust to skewed, high-variance, granular
feedback modeled after actual human behaviors. | http://arxiv.org/pdf/1707.07402 | Khanh Nguyen, Hal Daumé III, Jordan Boyd-Graber | cs.CL, cs.AI, cs.HC, cs.LG | 11 pages, 5 figures, In Proceedings of Empirical Methods in Natural
Language Processing (EMNLP) 2017 | null | cs.CL | 20170724 | 20171111 | [] |
1707.07402 | 1 | A common motivation for this problem setting is cost. In the case of translation, bilingual “experts” can read a source sentence and a possible translation, and can much more quickly provide a rating of that translation than they can produce a full translation on their own. Furthermore, one can often collect even less expensive ratings from “non-experts” who may or may not be bilingual (Hu et al., 2014). Breaking this reliance on expensive data could unlock previously ignored languages and speed development of broad-coverage machine translation systems.
All work on bandit structured prediction we know of makes an important simplifying assumption: the score provided by the world is exactly the score the system must optimize (§2). In the case of parsing, the score is attachment score; in the case of machine translation, the score is (sentence-level) BLEU. While this simplifying assumption has been incredibly useful in building algorithms, it is highly unrealistic. Any time we want to optimize a system by collecting user feedback, we must take into account:
1707.07402 | 2 | # Introduction
Bandit structured prediction is the task of learning to solve complex joint prediction problems (like parsing or machine translation) under a very limited feedback model: a system must produce a single structured output (e.g., translation) and then the world reveals a score that measures how good or bad that output is, but provides neither a “correct” output nor feedback on any other possible output (Chang et al., 2015; Sokolov et al., 2015). Because of the extreme sparsity of this feedback, a common experimental setup is that one pre-trains a good-but-not-great “reference” system based on whatever labeled data is available, and then seeks to improve it over time using this bandit feedback.
1. The human feedback (e.g., expert ratings) may not correlate perfectly with the measure that the reference system was trained on (e.g., BLEU or log likelihood);
2. Human judgments might be more granular than traditional continuous metrics (e.g., thumbs up vs. thumbs down);
3. Human feedback can have high variance (e.g., different raters might give different responses given the same system output);
4. Human feedback might be substantially skewed (e.g., a rater may think all system outputs are poor).
1707.07402 | 3 |
Our first contribution is a strategy to simulate expert and non-expert ratings to evaluate the robustness of bandit structured prediction algorithms in general, in a more realistic environment (§4). We
construct a family of perturbations to capture three attributes: granularity, variance, and skew. We apply these perturbations on automatically generated scores to simulate noisy human ratings. To make our simulated ratings as realistic as possible, we study recent human evaluation data (Graham et al., 2017) and fit models to match the noise profiles in actual human ratings (§4.2).
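A toy sketch of applying granularity, variance, and skew perturbations to a clean score; the specific transformations here (power-law skew, Gaussian noise, uniform rounding) are assumptions for illustration only and are not the exact perturbation models fitted in §4.2.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(score, granularity=5, noise_std=0.1, skew=1.0):
    """Turn a clean sentence-level score in [0, 1] into a simulated human rating.

    Illustrative model only: skew < 1 compresses scores toward 0 (harsh raters),
    noise_std adds rater variance, and granularity rounds to a coarse scale
    (e.g. 5 buckets, roughly a five-star widget).
    """
    s = score ** (1.0 / skew)                       # skew the distribution
    s = s + rng.normal(0.0, noise_std)              # per-rater variance
    s = np.clip(s, 0.0, 1.0)
    return np.round(s * (granularity - 1)) / (granularity - 1)   # quantize

bleu_like = 0.42
print([perturb(bleu_like, granularity=2),        # thumbs up / down
       perturb(bleu_like, noise_std=0.3),        # high-variance rater
       perturb(bleu_like, skew=0.5)])            # pessimistic rater
```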
1707.07402 | 4 | Our second contribution is a reinforcement learning solution to bandit structured prediction and a study of its robustness to these simulated human ratings (§3).1 We combine an encoder-decoder architecture for machine translation (Luong et al., 2015) with the advantage actor-critic algorithm (Mnih et al., 2016), yielding an approach that is simple to implement but works on low-resource bandit machine translation. Even with substantially restricted granularity, with high-variance feedback, or with skewed rewards, this combination improves pre-trained models (§6). In particular, under realistic settings of our noise parameters, the algorithm's online reward and final held-out accuracies do not significantly degrade from a noise-free setting.
# 2 Bandit Machine Translation
The bandit structured prediction problem (Chang et al., 2015; Sokolov et al., 2015) is an extension of the contextual bandits problem (Kakade et al., 2008; Langford and Zhang, 2008) to structured prediction. Bandit structured prediction operates over time i = 1 ... K as:
1707.07402 | 5 |
1. World reveals context x^(i)
2. Algorithm predicts structured output ŷ^(i)
3. World reveals reward R^(i)(ŷ^(i))
We consider the problem of learning to translate from human ratings in a bandit structured prediction framework. In each round, a translation model receives a source sentence x^(i), produces a translation ŷ^(i), and receives a rating R^(i)(ŷ^(i)) from a human that reflects the quality of the translation. We seek an algorithm that achieves high reward over K rounds (high cumulative reward). The challenge is that even though the model knows how good the translation is, it knows neither where its mistakes are nor what the “correct” translation looks like. It must balance exploration (finding new good predictions)
1 Our code is at https://github.com/khanhptnk/bandit-nmt (in PyTorch).
[Figure 1 screenshot: “Heftiges Gewitter über der Hauptstadt.” / “Heavy thunderstorm over the capital.” with a “Rate this translation” star widget (“Click a star to rate”).]
Figure 1: A translation rating interface provided by Facebook. Users see a sentence followed by its machine-generated translation and can give ratings from one to five stars.
1707.07402 | 6 |
with exploitation (producing predictions it already knows are good). This is especially difficult in a task like machine translation, where, for a twenty-token sentence with a vocabulary size of 50k, there are approximately 10^94 possible outputs, of which the algorithm gets to test exactly one.
Despite these challenges, learning from non-expert ratings is desirable. In real-world scenarios, non-expert ratings are easy to collect, but other, stronger forms of feedback are prohibitively expensive. Platforms that offer translations can get quick feedback “for free” from their users to improve their systems (Figure 1). Even in a setting in which annotators are paid, it is much less expensive to ask a bilingual speaker to provide a rating of a proposed translation than it is to pay a professional translator to produce one from scratch.
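The interaction protocol of Section 2 reduces to a short loop in code; `translate` and `collect_rating` are hypothetical stand-ins for the NMT system and the human rater, used only to show what information the learner does and does not see.

```python
# A toy version of the bandit interaction protocol: the system only ever sees
# a scalar rating for the single translation it produced, never a reference.

def translate(source: str) -> str:
    return source[::-1]                      # placeholder "model"

def collect_rating(source: str, hypothesis: str) -> float:
    return 3.0                               # e.g. a 1-5 star rating from a user

log = []
for source in ["ein Beispiel", "noch ein Satz"]:   # world reveals x^(i)
    hypothesis = translate(source)                  # system predicts y-hat^(i)
    reward = collect_rating(source, hypothesis)     # world reveals R^(i)(y-hat^(i))
    log.append((source, hypothesis, reward))
print(log)
```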
# 3 Effective Algorithm for Bandit MT
1707.07402 | 7 |
This section describes the neural machine translation architecture of our system (§3.1). We formulate bandit neural machine translation as a reinforcement learning problem (§3.2) and discuss why standard actor-critic algorithms struggle with this problem (§3.3). Finally, we describe a more effective training approach based on the advantage actor-critic algorithm (§3.4).
# 3.1 Neural machine translation
Our neural machine translation (NMT) model is a neural encoder-decoder that directly computes the probability of translating a target sentence y = (y_1, ..., y_m) from source sentence x:
P_θ(y | x) = ∏_{t=1}^{m} P_θ(y_t | y_{<t}, x)
where P_θ(y_t | y_{<t}, x) is the probability of outputting the next word y_t at time step t given a translation prefix y_{<t} and a source sentence x.
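A toy illustration of the factorization above, scoring one short translation with a hand-written conditional probability table instead of a neural decoder; the table entries are invented.

```python
import math

# Each prefix maps to a distribution over the next target word.
cond_prob = {("<s>",): {"heavy": 0.6, "big": 0.4},
             ("<s>", "heavy"): {"thunderstorm": 0.7, "rain": 0.3},
             ("<s>", "heavy", "thunderstorm"): {"</s>": 0.9, "over": 0.1}}

def sequence_logprob(tokens):
    prefix, total = ("<s>",), 0.0
    for tok in tokens:
        total += math.log(cond_prob[prefix][tok])   # log P(y_t | y_<t, x)
        prefix = prefix + (tok,)
    return total

print(sequence_logprob(["heavy", "thunderstorm", "</s>"]))  # log(0.6 * 0.7 * 0.9)
```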
We use an encoder-decoder NMT architecture with global attention (Luong et al., 2015), where both the encoder and decoder are recurrent neural networks (RNNs) (see Appendix A for a more detailed description). These models are normally trained by supervised learning, but as reference translations are not available in our setting, we use reinforcement learning methods, which only require numerical feedback to function.
1707.07402 | 8 | # 3.2 Bandit NMT as Reinforcement Learning
The NMT generation process can be viewed as a Markov decision process on a continuous state space. The states are the hidden vectors h^dec_t generated by the decoder. The action space is the target language's vocabulary.
To generate a translation from a source sentence x, an NMT model starts at an initial state h^dec_0: a representation of x computed by the encoder. At time step t, the model decides the next action to take by defining a stochastic policy P_θ(y_t | y_{<t}, x), which is directly parametrized by the parameters θ of the model. This policy takes the current state h^dec_{t-1} as input and produces a probability distribution over all actions (target vocabulary words). The next action ŷ_t is chosen by taking the arg max or by sampling from this distribution. The model then computes the next state h^dec_t by updating the current state h^dec_{t-1} with the chosen action.
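A sketch of one decoding episode under this MDP view; the `policy_distribution` stand-in ignores the encoder and just returns a random distribution over an invented vocabulary, whereas a real system would condition on the decoder hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["heavy", "thunderstorm", "over", "the", "capital", "</s>"]

def policy_distribution(state):
    """Stand-in for P_theta(. | y_<t, x): any function of the decoder state
    that returns a distribution over the target vocabulary."""
    logits = rng.normal(size=len(vocab))
    p = np.exp(logits - logits.max())
    return p / p.sum()

# One decoding episode viewed as an MDP: state = decoder state (here just the
# generated prefix), action = next target word, episode ends at "</s>".
state, translation = (), []
for _ in range(10):
    action = rng.choice(vocab, p=policy_distribution(state))   # sample, or argmax for greedy
    if action == "</s>":
        break
    translation.append(action)
    state = state + (action,)          # "update" the state with the chosen action
print(translation)
```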
The objective of bandit NMT is to find a policy that maximizes the expected reward of translations sampled from the model's policy:
$$\max_\theta \mathcal{L}_{pg}(\theta) = \max_\theta \; \mathbb{E}_{x \sim D_{tr},\, \hat{y} \sim P_\theta(\cdot \mid x)}\big[ R(\hat{y}, x) \big] \qquad (2)$$
where D_tr is the training set and R is the reward function (the rater).² We optimize this objective function with policy gradient methods. For a fixed x, the gradient of the objective in Eq 2 is:

$$\nabla_\theta \mathcal{L}_{pg}(\theta) = \mathbb{E}_{\hat{y} \sim P_\theta(\cdot \mid x)}\big[ R(\hat{y}, x)\, \nabla_\theta \log P_\theta(\hat{y} \mid x) \big] = \mathbb{E}_{\hat{y} \sim P_\theta(\cdot \mid x)}\Big[ \sum_{t=1}^{m} \sum_{y_t} Q(\hat{y}_{<t}, y_t)\, \nabla_\theta P_\theta(y_t \mid \hat{y}_{<t}, x) \Big] \qquad (3)$$
where Q(ŷ_<t, ŷ_t) is the expected future reward of ŷ_t given the current prefix ŷ_<t, then continuing sampling from P_θ to complete the translation:
$$Q(\hat{y}_{<t}, \hat{y}_t) = \mathbb{E}_{y' \sim P_\theta(\cdot \mid x)}\big[ \bar{R}(y', x) \big] \qquad (4)$$
$$\text{with } \bar{R}(y', x) = R(y', x)\, \mathbb{1}\{ y'_{<t} = \hat{y}_{<t},\; y'_t = \hat{y}_t \}$$
² Our raters are stochastic, but for simplicity we denote the reward as a function; it should be expected reward.
𝟙{·} is the indicator function, which returns 1 if the logic inside the bracket is true and returns 0 otherwise.
The gradient in Eq 3 requires rating all possible translations, which is not feasible in bandit NMT. Naïve Monte Carlo reinforcement learning methods such as REINFORCE (Williams, 1992) estimate Q values by sample means but yield very high variance when the action space is large, leading to training instability.
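For intuition, the single-sample REINFORCE estimator that Eq 3 reduces to can be written as the surrogate loss below (a PyTorch-style sketch of our own; the tensor shapes are assumptions). The same sentence-level reward multiplies every token's log-probability, which is exactly what makes the estimate noisy when the vocabulary is large.

```python
import torch

def reinforce_loss(token_logits, sampled_ids, reward):
    """Single-sample REINFORCE surrogate: -R(y_hat) * sum_t log P(y_hat_t | ...).

    token_logits: (T, V) decoder logits for each step of the sampled translation.
    sampled_ids:  (T,)   sampled target word ids y_hat_1..y_hat_T.
    reward:       scalar sentence-level rating R(y_hat).
    Minimizing this loss follows the (high-variance) sampled gradient of Eq 3.
    """
    log_probs = torch.log_softmax(token_logits, dim=-1)                  # (T, V)
    chosen = log_probs.gather(1, sampled_ids.unsqueeze(1)).squeeze(1)    # (T,)
    return -(reward * chosen.sum())
```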
# 3.3 Why are actor-critic algorithms not effective for bandit NMT?
Reinforcement learning methods that rely on function approximation are preferred when tackling bandit structured prediction with a large action space because they can capture similarities between structures and generalize to unseen regions of the structure space. The actor-critic algorithm (Konda and Tsitsiklis) uses function approximation to directly model the Q function, called the critic model. In our early attempts on bandit NMT, we adapted the actor-critic algorithm for NMT in Bahdanau et al. (2017), which employs the algorithm in a supervised learning setting. Specifically, while an encoder-decoder critic model Q_ω as a substitute for the true Q function in Eq 3 enables taking the full sum inside the expectation (because the critic model can be queried with any state-action pair), we are unable to obtain reasonable results with this approach.
Nevertheless, insights into why this approach fails on our problem explain the effectiveness of the approach discussed in the next section. There are two properties in Bahdanau et al. (2017) that our problem lacks but are key elements for a successful actor-critic. The first is access to reference translations: while the critic model is able to observe reference translations during training in their setting, bandit NMT assumes those are never available. The second is per-step rewards: while the reward function in their setting is known and can be exploited to compute immediate rewards after taking each action, in bandit NMT, the actor-critic algorithm struggles with credit assignment because it only receives reward when a translation is completed. Bahdanau et al. (2017) report that the algorithm degrades if rewards are delayed until the end, consistent with our observations.
With an enormous action space of bandit NMT, approximating gradients with the Q critic model
induces biases and potentially drives the model to wrong optima. Values of rarely taken actions are often overestimated without an explicit constraint between Q values of actions (e.g., a sum-to-one constraint). Bahdanau et al. (2017) add an ad-hoc regularization term to the loss function to mitigate this issue and further stabilize the algorithm with a delayed update scheme, but at the same time introduce extra tuning hyper-parameters.
# 3.4 Advantage Actor-Critic for Bandit NMT
We follow the approach of advantage actor-critic (Mnih et al., 2016, A2C) and combine it with the neural encoder-decoder architecture. The resulting algorithm, which we call NED-A2C, approximates the gradient in Eq 3 by a single sample ŷ ∼ P_θ(· | x) and centers the reward R(ŷ) using the state-specific expected future reward V(ŷ_<t) to reduce variance:
$$\nabla_\theta \mathcal{L}_{pg}(\theta) \approx \sum_{t=1}^{m} \bar{R}_t(\hat{y})\, \nabla_\theta \log P_\theta(\hat{y}_t \mid \hat{y}_{<t}) \qquad (5)$$
$$\text{with } \bar{R}_t(\hat{y}) = R(\hat{y}) - V(\hat{y}_{<t}), \qquad V(\hat{y}_{<t}) = \mathbb{E}_{\hat{y}_t \sim P(\cdot \mid \hat{y}_{<t})}\big[ Q(\hat{y}_{<t}, \hat{y}_t) \big]$$
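A minimal sketch of Eq 5 (ours, for illustration; the shapes and the `values` argument are assumptions): the sentence reward is centered by the critic's per-step value estimates, which are treated as constants, before weighting the token log-probabilities.

```python
import torch

def ned_a2c_policy_loss(token_logits, sampled_ids, reward, values):
    """Surrogate loss whose gradient matches Eq 5.

    token_logits: (T, V) decoder logits along the sampled translation.
    sampled_ids:  (T,)   sampled word ids y_hat_1..y_hat_T.
    reward:       scalar rating R(y_hat) for the whole translation.
    values:       (T,)   critic estimates V(y_hat_<t), treated as constants.
    """
    log_probs = torch.log_softmax(token_logits, dim=-1)
    chosen = log_probs.gather(1, sampled_ids.unsqueeze(1)).squeeze(1)   # (T,)
    centered = reward - values.detach()                                  # R_bar_t
    return -(centered * chosen).sum()
```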
We train a separate attention-based encoder-decoder model V_ω to estimate V values. This model encodes a source sentence x and decodes a sampled translation ŷ. At time step t, it computes V_ω(ŷ_<t, x) = w^T h^dec'_t, where h^dec'_t is the critic decoder's current hidden vector and w is a learned weight vector. The critic model minimizes the MSE between its estimates and the true values:
$$\mathcal{L}_{crt}(\omega) = \mathbb{E}_{x \sim D_{tr},\, \hat{y} \sim P_\theta(\cdot \mid x)}\Big[ \sum_{t=1}^{m} L_t(\hat{y}, x) \Big] \qquad (6) \qquad \text{with } L_t(\hat{y}, x) = \big[ V_\omega(\hat{y}_{<t}, x) - R(\hat{y}, x) \big]^2
We use a gradient approximation to update ω for a fixed x and ŷ ∼ P_θ(· | x):
$$\nabla_\omega \mathcal{L}_{crt}(\omega) \approx \sum_{t=1}^{m} \big[ V_\omega(\hat{y}_{<t}) - R(\hat{y}) \big]\, \nabla_\omega V_\omega(\hat{y}_{<t}) \qquad (7)$$
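The corresponding critic update can be sketched as a per-step MSE regression toward the sentence reward; autograd on this loss reproduces the direction of Eq 7 up to the constant factor 2 from differentiating the square. Again this is our sketch, not the released code.

```python
import torch

def critic_loss(values, reward):
    """MSE of Eq 6: push every per-step estimate V(y_hat_<t) toward R(y_hat).

    values: (T,) critic outputs along the sampled translation (requires grad).
    reward: scalar sentence-level rating.
    """
    target = torch.full_like(values, float(reward))
    return torch.nn.functional.mse_loss(values, target, reduction="sum")
```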
NED-A2C is better suited for problems with a large action space and has other advantages over actor-critic. For large action spaces, approximating gradients using the V critic model induces lower biases than using the Q critic model. As implied by its definition, the V model is robust to
biases incurred by rarely taken actions since rewards of those actions are weighted by very small probabilities in the expectation. In addition, the V model has a much smaller number of parameters and thus is more sample-efficient and more stable to train than the Q model. These attractive properties were not studied in A2C's original paper (Mnih et al., 2016).
# Algorithm 1 The NED-A2C algorithm for bandit NMT.
1: for i = 1 · · · K do
2:     receive a source sentence x(i)
3:     sample a translation: ŷ(i) ∼ P_θ(· | x(i))
4:     receive reward R(ŷ(i), x(i))
5:     update the NMT model using Eq 5
6:     update the critic model using Eq 7
7: end for
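A hedged Python rendering of Algorithm 1; `sample_translation`, `rater`, `nmt_update`, and `critic_update` are hypothetical helpers standing in for sampling from P_θ, the (possibly simulated) human rating, and the gradient steps of Eq 5 and Eq 7.

```python
def ned_a2c_train(train_sources, sample_translation, rater, nmt_update, critic_update):
    """One bandit-learning pass, following Algorithm 1."""
    for x in train_sources:                  # receive a source sentence
        y_hat = sample_translation(x)        # y_hat ~ P_theta(. | x)
        reward = rater(y_hat, x)             # single scalar rating, no reference
        nmt_update(x, y_hat, reward)         # gradient step of Eq 5
        critic_update(x, y_hat, reward)      # gradient step of Eq 7
```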
Algorithm 1 summarizes NED-A2C for bandit NMT. For each x, we draw a single sample ŷ from the NMT model, which is used for both estimating gradients of the NMT model and the critic model. We run this algorithm with mini-batches of x and aggregate gradients over all x in a mini-batch for each update. Although our focus is on bandit NMT, this algorithm naturally works with any bandit structured prediction problem.
# 4 Modeling Imperfect Ratings
Our goal is to establish the feasibility of using real human feedback to optimize a machine translation system, in a setting where one can collect expert feedback as well as a setting in which one only collects non-expert feedback. In all cases, we consider the expert feedback to be the "gold standard" that we wish to optimize. To establish the feasibility of driving learning from human feedback without doing a full, costly user study, we begin with a simulation study. The key aspects (Figure 2) of human feedback we capture are: (a) mismatch between training objective and feedback-maximizing objective, (b) human ratings typically are binned (§ 4.1), (c) individual human ratings have high variance (§ 4.2), and (d) non-expert ratings can be skewed with respect to expert ratings (§ 4.3).
In our simulated study, we begin by modeling gold standard human ratings using add-one-smoothed sentence-level BLEU (Chen and Cherry, 2014).³
³ "Smoothing 2" in Chen and Cherry (2014). We also add one to lengths when computing the brevity penalty.
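As a concrete stand-in for this simulated gold-standard rater, one could score a sampled translation against the reference with a smoothed sentence-level BLEU, e.g. via NLTK as sketched below. This is our sketch of the idea, not the authors' exact scorer; in particular, the extra add-one on lengths for the brevity penalty mentioned in the footnote is not applied here.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def simulated_rating(reference_tokens, hypothesis_tokens):
    """Smoothed sentence-BLEU in [0, 1], used as the simulated expert reward."""
    smoother = SmoothingFunction().method2   # add-one smoothing on n-gram counts
    return sentence_bleu([reference_tokens], hypothesis_tokens,
                         smoothing_function=smoother)
```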
Figure 2: Examples of how our perturbation functions change the "true" feedback distribution (left) to ones that better capture features found in human feedback (right).
Our evaluation criteria, therefore, is average sentence-BLEU over the run of our algorithm. However, in any realistic scenario, human feedback will vary from its average, and so the reward that our algorithm receives will be a perturbed variant of sentence-BLEU. In particular, if the sentence-BLEU score is s ∈ [0, 1], the algorithm will only observe s' ∼ pert(s), where pert is a perturbation distribution. Because our reference machine translation system is pre-trained using log-likelihood, there is already an (a) mismatch between training objective and feedback, so we focus on (b-d) below.
# 4.1 Humans Provide Granular Feedback
When collecting human feedback, it is often more effective to collect discrete binned scores. A classic example is the Likert scale for human agreement (Likert, 1932) or star ratings for product reviews. Insisting that human judges provide continuous values (or feedback at too fine a granularity) can demotivate raters without improving rating quality (Preston and Colman, 2000).
To model granular feedback, we use a simple rounding procedure. Given an integer parameter g for degree of granularity, we define:
$$\mathrm{pert}^{gran}(s; g) = \tfrac{1}{g}\,\mathrm{round}(g\,s) \qquad (8)$$
This perturbation function divides the range of possible outputs into g + 1 bins. For example, for g = 5, we obtain bins [0, 0.1), [0.1, 0.3), [0.3, 0.5), [0.5, 0.7), [0.7, 0.9) and [0.9, 1.0]. Since most sentence-BLEU scores are much closer to zero than to one, many of the larger bins are frequently vacant.
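Eq 8 translates directly into code (a one-line sketch under the same conventions):

```python
def pert_gran(s, g):
    """Granular feedback: snap a score s in [0, 1] to the nearest multiple of 1/g."""
    return round(g * s) / g
```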
# 4.2 Experts Have High Variance
Human feedback has high variance around its expected value. A natural goal for a variance model of human annotators is to simulate, as closely as possible, how human raters actually perform. We use human evaluation data recently collected as part of the WMT shared task (Graham et al., 2017). The data consist of 7200 sentences multiply annotated by giving non-expert annotators on Amazon Mechanical Turk a reference sentence and a single system translation, and asking the raters to judge the adequacy of the translation.⁴
From these data, we treat the average human rating as the ground truth and consider how individual human ratings vary around that mean. To visualize these results, Figure 3 shows the mean rating (x-axis) and a kernel density estimate (standard normal kernels) of the standard deviation of the human ratings (y-axis) at each mean.⁵ As expected, the standard deviation is small at the extremes and large in the middle (this is a bounded interval), with a fairly large range in the middle: a translation whose average score is 50 can get human evaluation scores anywhere between 20 and 80 with high probability. We use a linear approximation to define our variance-based perturbation function as a Gaussian distribution, which is parameterized by a scale λ that grows or shrinks the variances (when λ = 1 this exactly matches the variance in the plot):

$$\mathrm{pert}^{var}(s; \lambda) = \mathcal{N}\!\left(s,\; \lambda\,\sigma(s)^2\right) \qquad (9)$$
$$\sigma(s) = \begin{cases} 0.64\,s - 0.02 & \text{if } s < 50 \\ -0.67\,s + 67.0 & \text{otherwise} \end{cases}$$
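Putting Eq 9 and the piecewise-linear fit together, a simulated noisy rater might look like the sketch below. Scores here live on the 0-100 scale of the WMT adequacy judgments, and the clipping to the valid range is our own addition.

```python
import random

def sigma(s):
    """Piecewise-linear fit of rating standard deviation (score s on a 0-100 scale)."""
    return 0.64 * s - 0.02 if s < 50 else -0.67 * s + 67.0

def pert_var(s, lam=1.0):
    """High-variance feedback: Gaussian noise around s with scaled variance (Eq 9)."""
    noisy = random.gauss(s, (lam * sigma(s) ** 2) ** 0.5)
    return min(max(noisy, 0.0), 100.0)   # keep the rating in-range (our choice)
```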
# 4.3 Non-Experts are Skewed from Experts
The preceding two noise models assume that the reward closely models the value we want to optimize (has the same mean). This may not be
⁴ Typical machine translation evaluations evaluate pairs and ask annotators to choose which is better.
⁵ A current limitation of this model is that the simulated noise is i.i.d. conditioned on the rating (homoscedastic noise). While this is a stronger and more realistic model than assuming no noise, real noise is likely heteroscedastic: dependent on the input.
Figure 3: Average rating (x-axis) versus a kernel density estimate of the variance of human ratings around that mean, with linear fits. Human scores vary more around middling judgments than extreme judgments.
| | De-En | Zh-En |
|---|---|---|
| Supervised training | 186K | 190K |
| Bandit training | 167K | 165K |
| Development | 7.7K | 7.9K |
| Test | 9.1K | 7.4K |
Table 1: Sentence counts in data sets.
the case with non-expert ratings. Non-expert raters are skewed both for reinforcement learning (Thomaz et al., 2006; Thomaz and Breazeal, 2008; Loftin et al., 2014) and recommender systems (Herlocker et al., 2000; Adomavicius and Zhang, 2012), but are typically bimodal: some are harsh (typically provide very low scores, even for "okay" outputs) and some are motivational (providing high scores for "okay" outputs).
We can model both harsh and motivational raters with a simple deterministic skew perturbation function, parametrized by a scalar ρ ∈ [0, ∞):
$$\mathrm{pert}^{skew}(s; \rho) = s^{\rho} \qquad (10)$$
For ρ > 1, the rater is harsh; for ρ < 1, the rater is motivational.
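Eq 10 in code form (again a sketch; scores are assumed to lie on the [0, 1] scale of sentence-BLEU):

```python
def pert_skew(s, rho):
    """Skewed feedback: rho > 1 simulates a harsh rater, rho < 1 a motivational one."""
    return s ** rho
```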
# 5 Experimental Setup
We choose two language pairs from different language families with different typological properties: German-to-English (De-En) and Chinese-to-English (Zh-En). We use parallel transcriptions of TED talks for these pairs of languages from the machine translation track of IWSLT 2014 and 2015 (Cettolo et al., 2014, 2015, 2012). For each language pair, we split its data into four sets for supervised training, bandit training, development and testing (Table 1). For English and German, we tokenize and clean sentences using Moses (Koehn et al., 2007). For Chinese, we use the Stanford Chinese word segmenter (Chang et al., 2008) to segment sentences and tokenize. We remove all sentences with length greater than 50, resulting in an average sentence length of 18. We use IWSLT 2015 data for supervised training and development, IWSLT 2014 data for bandit training and previous years' development and evaluation data for testing.⁶
# 5.1 Evaluation Framework
For each task, we first use the supervised training set to pre-train a reference NMT model using supervised learning. On the same training set, we also pre-train the critic model with translations sampled from the pre-trained NMT model. Next, we enter a bandit learning mode where our models only observe the source sentences of the bandit training set. Unless specified differently, we train the NMT models with NED-A2C for one pass over the bandit training set. If a perturbation function is applied to Per-Sentence BLEU scores, it is only applied in this stage, not in the pre-training stage.

We measure the improvement ∆S of an evaluation metric S due to bandit training: ∆S = S_A2C - S_ref, where S_ref is the metric computed on the reference models and S_A2C is the metric computed on models trained with NED-A2C. Our primary interest is Per-Sentence BLEU: average sentence-level BLEU of translations that are sampled and scored
during the bandit learning pass. This metric represents average expert ratings, which we want to optimize for in real-world scenarios. We also measure Heldout BLEU: corpus-level BLEU on an unseen test set, where translations are greedily decoded by the NMT models. This shows how much our method improves translation quality, since corpus-level BLEU correlates better with human judgments than sentence-level BLEU.
Because of randomness due to both the random sampling in the model for "exploration" as well as the randomness in the reward function, we repeat each experiment five times and report the mean results with 95% confidence intervals.
⁶ Over 95% of the bandit learning set's sentences are seen during supervised learning. Performance gain on this set mainly reflects how well a model leverages weak learning signals (ratings) to improve previously made predictions. Generalizability is measured by performance gain on the test sets, which do not overlap the training sets.
# 5.2 Model configuration
Both the NMT model and the critic model are encoder-decoder models with global attention (Luong et al., 2015). The encoder and the decoder are unidirectional single-layer LSTMs. They have the same word embedding size and LSTM hidden size of 500. The source and target vocabulary sizes are both 50K. We do not use dropout in our experiments. We train our models with the Adam optimizer (Kingma and Ba, 2015) with β_1 = 0.9, β_2 = 0.999 and a batch size of 64. For Adam's α hyperparameter, we use 10^-3 during pre-training and 10^-4 during bandit learning (for both the NMT model and the critic model). During pre-training, starting from the fifth pass, we decay α by a factor of 0.5 when perplexity on the development set increases. The NMT model reaches its highest corpus-level BLEU on the development set after ten passes through the supervised training data, while the critic model's training error stabilizes after five passes. The training speed is 18s/batch for supervised pre-training and 41s/batch for training with the NED-A2C algorithm.
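The learning-rate schedule described above can be sketched as a small helper (ours; the rule is simply to halve α whenever development perplexity increases, from the fifth pass onward).

```python
def adam_alpha_schedule(alpha, epoch, dev_ppl, prev_dev_ppl):
    """Halve the Adam step size when dev perplexity increases (from epoch 5 on)."""
    if epoch >= 5 and prev_dev_ppl is not None and dev_ppl > prev_dev_ppl:
        alpha *= 0.5
    return alpha
```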
# 6 Results and Analysis
In this section, we describe the results of our experiments, broken into the following questions: how NED-A2C improves reference models (§ 6.1); the effect the three perturbation functions have on the algorithm (§ 6.2); and whether the algorithm improves a corpus-level metric that corresponds well with human judgments (§ 6.3).
# 6.1 Effectiveness of NED-A2C under Un-perturbed Bandit Feedback
We evaluate our method in an ideal setting where un-perturbed Per-Sentence BLEU simulates ratings during both training and evaluation (Table 2).
Single round of feedback. In this setting, our models only observe each source sentence once, before producing its translation. On both De-En and Zh-En, NED-A2C improves Per-Sentence BLEU of reference models after only a single pass (+2.82 and +1.08 respectively).
Poor initialization. Policy gradient algorithms have difficulty improving from poor initializations, especially on problems with a large action space, because they use model-based exploration, which is ineffective when most actions
Figure 4: Learning curves of models trained with NED-A2C for five epochs.
have equal probabilities (Bahdanau et al., 2017; Ranzato et al., 2016). To see whether NED-A2C has this problem, we repeat the experiment with the same setup but with reference models pre-trained for only a single pass. Surprisingly, NED-A2C is highly effective at improving these poorly trained models (+7.07 on De-En and +3.60 on Zh-En in Per-Sentence BLEU).
Comparisons with supervised learning. To further demonstrate the effectiveness of NED-A2C, we compare it with training the reference models with supervised learning for a single pass on the bandit training set. Surprisingly, observing ground-truth translations barely improves the models in Per-Sentence BLEU when they are fully trained (less than +0.4 on both tasks). A possible explanation is that the models have already reached full capacity and do not benefit from more examples.⁷ NED-A2C further enhances the models because it eliminates the mismatch between the supervised training objective and the evaluation objective. On weakly trained reference models, NED-A2C also significantly outperforms supervised learning (∆Per-Sentence BLEU of NED-A2C is over three times as large as that of supervised learning).
Multiple rounds of feedback. We examine if NED-A2C can improve the models even further
⁷ This result may vary if the domains of the supervised learning set and the bandit training set are dissimilar. Our training data are all TED talks.
| | De-En Reference | De-En ∆sup | De-En ∆A2C | Zh-En Reference | Zh-En ∆sup | Zh-En ∆A2C |
|---|---|---|---|---|---|---|
| Fully pre-trained reference model | | | | | | |
| Per-Sentence BLEU | 38.26 ± 0.02 | 0.07 ± 0.05 | 2.82 ± 0.03 | 32.79 ± 0.01 | 0.36 ± 0.05 | 1.08 ± 0.03 |
| Heldout BLEU | 24.94 ± 0.00 | 1.48 ± 0.00 | 1.82 ± 0.08 | 13.73 ± 0.00 | 1.18 ± 0.00 | 0.86 ± 0.11 |
| Weakly pre-trained reference model | | | | | | |
| Per-Sentence BLEU | 19.15 ± 0.01 | 2.94 ± 0.02 | 7.07 ± 0.06 | 14.77 ± 0.01 | 1.11 ± 0.02 | 3.60 ± 0.04 |
| Heldout BLEU | 19.63 ± 0.00 | 3.94 ± 0.00 | 1.61 ± 0.17 | 9.34 ± 0.00 | 2.31 ± 0.00 | 0.92 ± 0.13 |
Table 2: Translation scores and improvements based on a single round of un-perturbed bandit feedback. Per-Sentence BLEU and Heldout BLEU are not comparable: the former is sentence-BLEU, the latter is corpus-BLEU.
with multiple rounds of feedback.⁸ With supervised learning, the models can memorize the reference translations; with bandit feedback, the models have to be able to exploit and explore effectively. We train the models with NED-A2C for five passes and observe a much more significant ∆Per-Sentence BLEU than training for a single pass in both pairs of languages (+6.73 on De-En and +4.56 on Zh-En) (Figure 4).
# 6.2 Effect of Perturbed Bandit Feedback
We apply perturbation functions defined in § 4.1 to Per-Sentence BLEU scores and use the perturbed scores as rewards during bandit training (Figure 5).
Granular Rewards. We discretize raw Per-Sentence BLEU scores using pert^gran(s; g) (§ 4.1). We vary g from one to ten (the number of bins varies from two to eleven). Compared to continuous rewards, for both pairs of languages, ∆Per-Sentence BLEU is not affected when g is at least five (at least six bins). As granularity decreases, ∆Per-Sentence BLEU monotonically degrades. However, even when g = 1 (scores are either 0 or 1), the models still improve by at least a point.
High-variance rewards. We simulate noisy rewards using the model of human rating variance pert^var(s; λ) (§ 4.2) with λ ∈ {0.1, 0.2, 0.5, 1, 2, 5}. Our models can withstand an amount of about 20% of the variance in our human eval data without dropping in ∆Per-Sentence BLEU. When the amount of variance attains 100%, matching the amount of variance in the human data, ∆Per-Sentence BLEU goes down
⁸ The ability to receive feedback on the same example multiple times might not fit all use cases though.
by about 30% for both pairs of languages. As more variance is injected, the models degrade more quickly but still improve over the pre-trained models. Variance is the most detrimental type of perturbation to NED-A2C among the three aspects of human ratings we model.
Skewed rewards. We simulate skewed raters using pert^skew(s; ρ) (§ 4.3) with ρ ∈ {0.25, 0.5, 0.67, 1, 1.5, 2, 4}. NED-A2C is robust to skewed scores. ∆Per-Sentence BLEU is at least 90% of that with unskewed scores for most skew values. Only when the scores are extremely harsh (ρ = 4) does ∆Per-Sentence BLEU degrade significantly (most dramatically by 35% on Zh-En). At that degree of skew, a score of 0.3 is suppressed to be less than 0.08, giving little signal for the models to learn from. On the other end of the spectrum, the models are less sensitive to motivating scores, as Per-Sentence BLEU is unaffected on Zh-En and only decreases by 7% on De-En.
1707.07402 | 36 | # 6.3 Held-out Translation Quality
Our method also improves pre-trained models in Heldout BLEU, a metric that correlates with translation quality better than Per-Sentence BLEU (Table 2). When scores are perturbed by our rating model, we observe similar patterns as with Per-Sentence BLEU: the models are robust to most perturbations except when scores are very coarse, very harsh, or have very high variance (Figure 5, second row). Supervised learning improves Heldout BLEU more, possibly because maximizing the log-likelihood of reference translations correlates more strongly with maximizing Heldout BLEU of predicted translations than maximizing Per-Sentence BLEU of predicted translations.
Figure 5: Performance gains of NMT models trained with NED-A2C in Per-Sentence BLEU (top row) and in Heldout BLEU (bottom row) under various degrees of granularity, variance, and skew of scores. Performance gains of models trained with un-perturbed scores are within the shaded regions.
# 7 Related Work and Discussion
1707.07402 | 37 | Ratings provided by humans can be used as effective learning signals for machines. Reinforcement learning has become the de facto standard for incorporating this feedback across diverse tasks such as robot voice control (Tenorio-Gonzalez et al., 2010), myoelectric control (Pilarski et al., 2011), and virtual assistants (Isbell et al., 2001). Recently, this learning framework has been combined with recurrent neural networks to solve machine translation (Bahdanau et al., 2017), dialogue generation (Li et al., 2016), neural architecture search (Zoph and Le, 2017), and device placement (Mirhoseini et al., 2017). Other approaches to more general structured prediction under bandit feedback (Chang et al., 2015; Sokolov et al., 2016a,b) show the broader efficacy of this framework. Ranzato et al. (2016) describe MIXER for training neural encoder-decoder models, which is a reinforcement learning approach closely
1707.07402 | 38 | related to ours but requires a policy-mixing strategy and only uses a linear critic model. Among work on bandit MT, ours is closest to Kreutzer et al. (2017), who also tackle this problem using neural encoder-decoder models, but we (a) take advantage of a state-of-the-art reinforcement learning method; (b) devise a strategy to simulate noisy rewards; and (c) demonstrate the robustness of our method on noisy simulated rewards.
1707.07402 | 39 | Our results show that bandit feedback can be an effective feedback mechanism for neural machine translation systems. This is despite the fact that errors in human annotations hurt machine learning models in many NLP tasks (Snow et al., 2008). An obvious question is whether we could extend our framework to model individual annotator preferences (Passonneau and Carpenter, 2014) or learn personalized models (Mirkin et al., 2015; Rabinovich et al., 2017), and handle heteroscedastic noise (Park, 1966; Kersting et al., 2007; Antos et al., 2010). Another direction is to apply active learning techniques to reduce the sample complexity required to improve the systems, or to extend to richer action spaces for problems like simultaneous translation, which requires prediction (Grissom II et al., 2014) and reordering (He et al., 2015) among other strategies to both minimize delay and effectively translate a sentence (He et al., 2016).
1707.07402 | 40 | # Acknowledgements
Many thanks to Yvette Graham for her help with the WMT human evaluations data. We thank UMD CLIP lab members for useful discussions that led to the ideas of this paper. We also thank the anonymous reviewers for their thorough and insightful comments. This work was supported by NSF grant IIS-1320538. Boyd-Graber is also partially supported by NSF grants IIS-1409287, IIS-1564275, IIS-1652666, and NCSE-1422492. Daumé III is also supported by NSF grant IIS-1618193, as well as an Amazon Research Award. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor(s).
# A Neural MT Architecture
Our neural machine translation (NMT) model consists of an encoder and a decoder, each of which is a recurrent neural network (RNN). We closely follow (Luong et al., 2015) for the structure of our model. It directly models the posterior distribution Pθ(y | x) of translating a source sentence x = (x1, · · · , xn) to a target sentence y = (y1, · · · , ym):
1707.07402 | 41 | Pθ(y | x) = ∏_{t=1}^{m} Pθ(yt | y<t, x)
where y<t are all tokens in the target sentence prior to yt.
Each local distribution Pθ(yt | y<t, x) is modeled as a multinomial distribution over the target language's vocabulary. We compute this distribution by applying a linear transformation followed by a softmax function on the decoder's output vector h^dec_t:

Pθ(yt | y<t, x) = softmax(W_s h^dec_t)   (12)
h^dec_t = tanh(W_o [h̃^dec_t ; c_t])   (13)
c_t = attend(h̃^enc_{1:n}, h̃^dec_t)   (14)

where [· ; ·] is the concatenation of two vectors, attend(·, ·) is an attention mechanism, h̃^enc_{1:n} are all the encoder's hidden vectors and h̃^dec_t is the decoder's hidden vector at time step t. We use the "general" global attention of (Luong et al., 2015).
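A minimal sketch of Eqs. (12)–(14) for a single sentence, written in PyTorch-style tensor code; the parameter names (W_a, W_o, W_s) and the assumed shapes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def decoder_output_step(h_dec_t, h_enc, W_a, W_o, W_s):
    """One output-layer step for a single sentence.
    h_dec_t: (d,) decoder hidden state at step t
    h_enc:   (n, d) all encoder hidden states
    W_a: (d, d), W_o: (d, 2d), W_s: (vocab, d) -- assumed shapes.
    Returns the distribution over the target vocabulary (Eq. 12)."""
    # "general" global attention: score_i = h_enc_i^T W_a h_dec_t
    scores = h_enc @ (W_a @ h_dec_t)                         # (n,)
    alpha = F.softmax(scores, dim=0)                         # attention weights
    c_t = alpha @ h_enc                                      # context vector, Eq. (14)
    h_out_t = torch.tanh(W_o @ torch.cat([h_dec_t, c_t]))    # output vector, Eq. (13)
    return F.softmax(W_s @ h_out_t, dim=0)                   # P(y_t | y_<t, x), Eq. (12)
```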
1707.07402 | 42 | During training, the encoder first encodes x to a continuous vector Φ(x), which is used as the initial hidden vector for the decoder. In our paper, Φ(x) simply returns the last hidden vector of the encoder. The decoder performs RNN updates to produce a sequence of hidden vectors:

h̃^dec_0 = Φ(x)
h̃^dec_t = f^dec(h̃^dec_{t-1}, [e(yt) ; h^dec_{t-1}])   (15)

where e(·) is a word embedding lookup function and yt is the ground-truth token at time step t. Feeding the output vector h^dec_{t-1} to the next step is known as "input feeding".
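The recurrence in Eq. (15) with input feeding can be sketched as follows; f_dec and output_layer are assumed callables standing in for the decoder RNN cell and the attention/softmax layer above.

```python
import numpy as np

def decoder_unroll(f_dec, output_layer, phi_x, y_embeds, d):
    """Unroll Eq. (15) with input feeding: each step consumes the gold-token
    embedding e(y_t) concatenated with the previous step's output vector.
    f_dec(h_prev, inp) -> h_t and output_layer(h_t) -> (h_out_t, p_t) are
    assumed callables; phi_x is the encoder summary Phi(x) used as h~_0."""
    h_t = phi_x                                   # h~_0 = Phi(x)
    h_out_prev = np.zeros(d)                      # no output vector before step 1
    step_probs = []
    for e_y_t in y_embeds:                        # e(y_1), ..., e(y_m)
        h_t = f_dec(h_t, np.concatenate([e_y_t, h_out_prev]))   # Eq. (15)
        h_out_prev, p_t = output_layer(h_t)       # attention + Eqs. (12)-(14)
        step_probs.append(p_t)
    return step_probs
```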
At prediction time, the ground-truth token yt in Eq. (15) is replaced by the model's own prediction ŷt:

ŷt = arg max_y Pθ(y | ŷ<t, x)   (16)
In a supervised learning framework, an NMT model is typically trained under the maximum log-likelihood objective:

max_θ L_sup(θ) = max_θ E_{(x,y)∼D_tr} [ log Pθ(y | x) ]   (17)
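A sketch of how Eqs. (16) and (17) are used in practice: greedy decoding feeds back the model's own predictions, while the supervised objective scores the reference translation. step_prob_fn is an assumed callable returning Pθ(· | y<t, x) as a vocabulary-sized array.

```python
import numpy as np

def greedy_decode(step_prob_fn, x, max_len, eos_id):
    """Eq. (16): at prediction time, feed back the model's own argmax token."""
    y_hat = []
    for _ in range(max_len):
        p_t = step_prob_fn(x, y_hat)          # P_theta(. | y_hat_<t, x)
        y_t = int(np.argmax(p_t))
        y_hat.append(y_t)
        if y_t == eos_id:
            break
    return y_hat

def supervised_loss(step_prob_fn, x, y):
    """Eq. (17): negative log-likelihood of the reference y; maximizing the
    log-likelihood is the same as minimizing this quantity."""
    nll = 0.0
    for t, y_t in enumerate(y):
        p_t = step_prob_fn(x, y[:t])
        nll -= np.log(p_t[y_t])
    return nll
```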
1707.07402 | 43 | where D_tr is the training set. However, this learning framework is not applicable to bandit learning since ground-truth translations are not available.
# References
Gediminas Adomavicius and Jingjing Zhang. 2012. Impact of data characteristics on recommender systems performance. ACM Transactions on Management Information Systems (TMIS) 3(1):3.

András Antos, Varun Grover, and Csaba Szepesvári. 2010. Active learning in heteroscedastic noise. Theoretical Computer Science 411(29-30):2712–2728.

Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In International Conference on Learning Representations (ICLR).

Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web inventory of transcribed and translated talks. In Conference of the European Association for Machine Translation (EAMT). Trento, Italy.
1707.07402 | 44 | Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, Roldano Cattoni, and Marcello Federico. 2015. The IWSLT 2015 evaluation campaign. In International Workshop on Spoken Language Translation (IWSLT).

Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign, IWSLT 2014. In International Workshop on Spoken Language Translation (IWSLT).

Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé III, and John Langford. 2015. Learning to search better than your teacher. In Proceedings of the International Conference on Machine Learning (ICML).

Pi-Chuan Chang, Michel Galley, and Chris Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Workshop on Machine Translation.

Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU. In Association for Computational Linguistics (ACL).
1707.07402 | 45 | Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2017. Can machine translation systems be evaluated by the crowd alone. Natural Language Engineering 23(1):3–30.

Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daumé III. 2014. Don't until the final verb wait: Reinforcement learning for simultaneous machine translation. In Empirical Methods in Natural Language Processing (EMNLP).

He He, Jordan Boyd-Graber, and Hal Daumé III. 2016. Interpretese vs. translationese: The uniqueness of human strategies in simultaneous interpretation. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).

He He, Alvin Grissom II, Jordan Boyd-Graber, and Hal Daumé III. 2015. Syntax-based rewriting for simultaneous machine translation. In Empirical Methods in Natural Language Processing (EMNLP).
1707.07402 | 46 | Jonathan L Herlocker, Joseph A Konstan, and John Riedl. 2000. Explaining collaborative filtering recommendations. In ACM Conference on Computer Supported Cooperative Work.

Chang Hu, Philip Resnik, and Benjamin B Bederson. 2014. Crowdsourced monolingual translation. ACM Transactions on Computer-Human Interaction (TOCHI) 21(4):22.

Charles Isbell, Christian R Shelton, Michael Kearns, Satinder Singh, and Peter Stone. 2001. A social reinforcement learning agent. In International Conference on Autonomous Agents (AA).

Sham M Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. 2008. Efficient bandit algorithms for online multiclass prediction. In International Conference on Machine Learning (ICML).

Kristian Kersting, Christian Plagemann, Patrick Pfaff, and Wolfram Burgard. 2007. Most likely heteroscedastic Gaussian process regression. In International Conference on Machine Learning (ICML).

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
1707.07402 | 47 | Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Association for Computational Linguistics (ACL).

Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In Advances in Neural Information Processing Systems (NIPS).

Julia Kreutzer, Artem Sokolov, and Stefan Riezler. 2017. Bandit structured prediction for neural sequence-to-sequence learning. In Association for Computational Linguistics (ACL).

John Langford and Tong Zhang. 2008. The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in Neural Information Processing Systems (NIPS).

Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep reinforcement learning for dialogue generation. In Empirical Methods in Natural Language Processing (EMNLP).
1707.07402 | 48 | Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of Psychology 22(140):1–55.

Robert Loftin, James MacGlashan, Michael L Littman, Matthew E Taylor, and David L Roberts. 2014. A strategy-aware technique for learning behaviors from discrete human feedback. Technical report, North Carolina State University, Dept. of Computer Science.

Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP).

Azalia Mirhoseini, Hieu Pham, Quoc V Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Mohammad Norouzi, Samy Bengio, and Jeff Dean. 2017. Device placement optimization with reinforcement learning. In International Conference on Machine Learning (ICML).

Shachar Mirkin, Scott Nowson, Caroline Brun, and Julien Perez. 2015. Motivating personality-aware machine translation. In The 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP).
1707.07402 | 49 | Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML).

Rolla E Park. 1966. Estimation with heteroscedastic error terms. Econometrica 34(4):888.

Rebecca J Passonneau and Bob Carpenter. 2014. The benefits of a model of annotation. Transactions of the Association for Computational Linguistics (TACL) 2:311–326.

Patrick M Pilarski, Michael R Dawson, Thomas Degris, Farbod Fahimi, Jason P Carey, and Richard S Sutton. 2011. Online human training of a myoelectric prosthesis controller via actor-critic reinforcement learning. In IEEE International Conference on Rehabilitation Robotics (ICORR).

Carolyn C Preston and Andrew M Colman. 2000. Optimal number of response categories in rating scales: reliability, validity, discriminating power, and respondent preferences. Acta Psychologica 104(1):1–15.
1707.07402 | 50 | Ella Rabinovich, Shachar Mirkin, Raj Nath Patel, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits. Association for Computational Linguistics (ACL).

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. International Conference on Learning Representations (ICLR).

Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y Ng. 2008. Cheap and fast – but is it good?: Evaluating non-expert annotations for natural language tasks. In Empirical Methods in Natural Language Processing (EMNLP).

Artem Sokolov, Julia Kreutzer, Christopher Lo, and Stefan Riezler. 2016a. Learning structured predictors from bandit feedback for interactive NLP. In Association for Computational Linguistics (ACL).

Artem Sokolov, Julia Kreutzer, and Stefan Riezler. 2016b. Stochastic structured prediction under bandit feedback. In Advances in Neural Information Processing Systems (NIPS).
1707.07402 | 51 | Artem Sokolov, Stefan Riezler, and Tanguy Urvoy. 2015. Bandit structured prediction for learning from partial feedback in statistical machine translation. In Proceedings of MT Summit XV. Miami, FL.

Ana C Tenorio-Gonzalez, Eduardo F Morales, and Luis Villaseñor-Pineda. 2010. Dynamic reward shaping: training a robot by voice. In Ibero-American Conference on Artificial Intelligence. Springer, pages 483–492.

Andrea L Thomaz and Cynthia Breazeal. 2008. Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence 172(6-7):716–737.

Andrea Lockerd Thomaz, Cynthia Breazeal, et al. 2006. Reinforcement learning with human teachers: Evidence of feedback and guidance with implications for learning performance. In Association for the Advancement of Artificial Intelligence (AAAI).

Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8(3-4):229–256.

Barret Zoph and Quoc V. Le. 2017. Neural architecture search with reinforcement learning. In International Conference on Learning Representations (ICLR).
1707.06875 | 0 | arXiv:1707.06875v1 [cs.CL] 21 Jul 2017
# Why We Need New Evaluation Metrics for NLG
Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry and Verena Rieser. School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh. j.novikova, o.dusek, ac293, [email protected]
# Abstract
The majority of NLG evaluation relies on automatic metrics, such as BLEU. In this paper, we motivate the need for novel, system- and data-independent automatic evaluation methods: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. We also show that metric performance is data- and system-specific. Nevertheless, our results also suggest that automatic metrics perform reliably at system-level and can support system development by finding cases where a system performs poorly.
1707.06875 | 1 | # Introduction
Automatic evaluation measures, such as BLEU (Papineni et al., 2002), are used with increasing frequency to evaluate Natural Language Generation (NLG) systems: Up to 60% of NLG research published between 2012–2015 relies on automatic metrics (Gkatzia and Mahamood, 2015). Automatic evaluation is popular because it is cheaper and faster to run than human evaluation, and it is needed for automatic benchmarking and tuning of algorithms. The use of such metrics is, however, only sensible if they are known to be sufficiently correlated with human preferences. This is rarely the case, as shown by various studies in NLG (Stent et al., 2005; Belz and Reiter, 2006; Reiter and Belz, 2009), as well as in related fields, such as dialogue systems (Liu et al., 2016), machine translation (MT) (Callison-Burch et al., 2006), and image captioning (Elliott and Keller, 2014; Kilickaya et al., 2017). This paper follows on from the
1707.07012 | 1 | Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the "NASNet search space") which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a "NASNet architecture". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4% error rate, which is
1707.06875 | 2 | above previous work and presents another evaluation study into automatic metrics with the aim to firmly establish the need for new metrics. We consider this paper to be the most complete study to date, across metrics, systems, datasets and domains, focusing on recent advances in data-driven NLG. In contrast to previous work, we are the first to:
• Target end-to-end data-driven NLG, where we compare 3 different approaches. In contrast to NLG methods evaluated in previous work, our systems can produce ungrammatical output by (a) generating word-by-word, and (b) learning from noisy data.
• Compare a large number of 21 automated metrics, including novel grammar-based ones.
• Report results on two different domains and three different datasets, which allows us to draw more general conclusions.
• Conduct a detailed error analysis, which suggests that, while metrics can be reasonable indicators at the system-level, they are not reliable at the sentence-level.
• Make all associated code and data publicly available, including detailed analysis results.1
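As a rough illustration of the kind of analysis this entails, the sketch below computes sentence-level and system-level correlations between an automatic metric and human ratings; the data layout and the scipy-based Spearman correlation are illustrative assumptions, not the paper's released analysis code.

```python
from collections import defaultdict
from scipy.stats import spearmanr

def sentence_level_rho(metric_scores, human_scores):
    """Correlation between per-utterance metric scores and human ratings."""
    rho, _ = spearmanr(metric_scores, human_scores)
    return rho

def system_level_rho(records):
    """records: list of (system_name, metric_score, human_score) per output.
    Correlate the per-system averages instead of individual outputs."""
    by_system = defaultdict(lambda: ([], []))
    for system, metric, human in records:
        by_system[system][0].append(metric)
        by_system[system][1].append(human)
    metric_means = [sum(m) / len(m) for m, _ in by_system.values()]
    human_means = [sum(h) / len(h) for _, h in by_system.values()]
    rho, _ = spearmanr(metric_means, human_means)
    return rho
```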
1707.07012 | 2 | state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS, a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object
1707.06875 | 3 | # 2 End-to-End NLG Systems
In this paper, we focus on recent end-to-end, data-driven NLG methods, which jointly learn sentence planning and surface realisation from non-aligned data (Dušek and Jurčíček, 2015; Wen et al., 2015; Mei et al., 2016; Wen et al., 2016; Sharma et al., 2016; Dušek and Jurčíček, 2016; Lampouras and Vlachos, 2016). These approaches do not require costly semantic alignment between Meaning Representations (MR) and human references (also referred to as "ground truth" or "targets"), but are
1 Available for download at: https://github.com/jeknov/EMNLP_17_submission
| System | BAGEL | SFREST | SFHOTEL | Total |
|--------|-------|--------|---------|-------|
| LOLS   | 202   | 581    | 398     | 1,181 |
| RNNLG  | –     | 600    | 477     | 1,077 |
| TGEN   | 202   | –      | –       | 202   |
| Total  | 404   | 1,181  | 875     | 2,460 |

Table 1: Number of NLG system outputs from different datasets and systems used in this study.
1707.06875 | 4 | based on parallel datasets, which can be collected in sufficient quality and quantity using effective crowdsourcing techniques, e.g. (Novikova et al., 2016), and as such, enable rapid development of NLG components in new domains. In particular, we compare the performance of the following systems:
• RNNLG:2 The system by Wen et al. (2015) uses a Long Short-term Memory (LSTM) network to jointly address sentence planning and surface realisation. It augments each LSTM cell with a gate that conditions it on the input MR, which allows it to keep track of MR contents generated so far.
• TGEN:3 The system by Dušek and Jurčíček (2015) learns to incrementally generate deep-syntax dependency trees of candidate sentence plans (i.e. which MR elements to mention and the overall sentence structure). Surface realisation is performed using a separate, domain-independent rule-based module.
• LOLS:4 The system by Lampouras and Vlachos (2016) learns sentence planning and surface realisation using Locally Optimal Learning to Search (LOLS), an imitation learning framework which learns using BLEU and ROUGE as non-decomposable loss functions.
1707.06875 | 5 | # 3 Datasets
We consider the following crowdsourced datasets, which target utterance generation for spoken dialogue systems. Table 1 shows the number of system outputs for each dataset. Each data instance consists of one MR and one or more natural language references as produced by humans, such as the following example, taken from the BAGEL dataset:5
2 https://github.com/shawnwun/RNNLG   3 https://github.com/UFAL-DSG/tgen   4 https://github.com/glampouras/JLOLS_NLG
5 Note that we use lexicalised versions of SFHOTEL and SFREST and a partially lexicalised version of BAGEL, where proper names and place names are replaced by placeholders ("X"), in correspondence with the outputs generated by the
MR: type=restaurant) Reference: "X is a moderately priced restaurant in X."
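Such an instance can be represented, roughly, as one MR paired with its references; the field names below are illustrative, not the datasets' actual schema.

```python
# A single NLG data instance: one meaning representation (MR) paired with
# one or more human references; "X" marks delexicalised slot values.
instance = {
    "mr": {"type": "restaurant", "pricerange": "moderate", "area": "X"},  # illustrative slots
    "references": [
        "X is a moderately priced restaurant in X.",
    ],
}
```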
1707.07012 | 5 | In this paper, we study a new paradigm of designing convolutional architectures and describe a scalable method to optimize convolutional architectures on a dataset of interest, for instance the ImageNet classification dataset. Our approach is inspired by the recently proposed Neural Architecture Search (NAS) framework [71], which uses a reinforcement learning search method to optimize architecture configurations. Applying NAS, or any other search methods, directly to a large dataset, such as the ImageNet dataset, is however computationally expensive. We therefore propose to search for a good architecture on a proxy dataset, for example the smaller CIFAR-10 dataset, and then transfer the learned architecture to ImageNet. We achieve this transferability by designing a search space (which we call "the NASNet search space") so that the complexity of the architecture is independent of the depth of the network and the size of input images. More concretely, all convolutional networks in our search space are composed of convolutional layers (or "cells") with identical structure but | 1707.07012#5 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 6 | MR: type=restaurant) Reference: "X is a moderately priced restaurant in X."
SFHOTEL & SFREST (Wen et al., 2015) provide information about hotels and restaurants in San Francisco. There are 8 system dialogue act types, such as inform, confirm, goodbye etc. Each domain contains 12 attributes, where some are common to both domains, such as name, type, pricerange, address, area, etc., and the others are domain-specific, e.g. food and kids-allowed for restaurants; hasinternet and dogs-allowed for hotels. For each domain, around 5K human references were collected with 2.3K unique human utterances for SFHOTEL and 1.6K for SFREST. The number of unique system outputs produced is 1181 for SFREST and 875 for SFHOTEL. • BAGEL (Mairesse et al., 2010) provides information about restaurants in Cambridge. The dataset contains 202 aligned pairs of MRs and 2 corresponding references each. The domain is a subset of SFREST, including only the inform act and 8 attributes.
# 4 Metrics
# 4.1 Word-based Metrics (WBMs) | 1707.06875#6 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 6 | all convolutional networks in our search space are composed of convolutional layers (or "cells") with identical structure but different weights. Searching for the best convolutional architectures is therefore reduced to searching for the best cell structure. Searching for the best cell structure has two main benefits: it is much faster than searching for an entire network architecture and the cell itself is more likely to generalize to other problems. In our experiments, this approach significantly accelerates the search for the best architectures using CIFAR-10 by a factor of 7× and learns architectures that successfully transfer to ImageNet. | 1707.07012#6 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 7 | # 4 Metrics
# 4.1 Word-based Metrics (WBMs)
NLG evaluation has borrowed a number of automatic metrics from related fields, such as MT, summarisation or image captioning, which compare output texts generated by systems to ground-truth references produced by humans. We refer to this group as word-based metrics. In general, the higher these scores are, the better or more similar to the human references the output is.6 The following order reflects the degree these metrics move from simple n-gram overlap to also considering term frequency (TF-IDF) weighting and semantically similar words. • Word-overlap Metrics (WOMs): We consider frequently used metrics, including TER (Snover et al., 2006), BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), NIST (Doddington, 2002), LEPOR (Han et al., 2012), CIDEr (Vedantam et al., 2015), and METEOR (Lavie and Agarwal, 2007). • Semantic Similarity (SIM): We calculate the Semantic Text Similarity measure designed by Han et al. (2013). This measure is based on distributional similarity and Latent Semantic Analysis | 1707.06875#7 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
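As a concrete illustration of the word-overlap family listed above, the sketch below scores one system output against its human references with sentence-level BLEU. It is a minimal sketch only: it assumes NLTK is installed, the utterance and references are invented examples, and the smoothing choice is ours rather than a detail taken from the paper.

```python
# Minimal sentence-level BLEU sketch (a word-overlap metric); assumes NLTK is installed.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical system output and human references for a single MR.
hypothesis = "X is a moderately priced restaurant in X".split()
references = [
    "X is a moderately priced restaurant in X".split(),
    "X is a restaurant in X with moderate prices".split(),
]

# Smoothing avoids zero scores when higher-order n-grams have no overlap.
smooth = SmoothingFunction().method1
score = sentence_bleu(references, hypothesis, smoothing_function=smooth)
print(f"sentence-level BLEU: {score:.3f}")
```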
1707.07012 | 7 | Our main result is that the best architecture found on CIFAR-10, called NASNet, achieves state-of-the-art accuracy when transferred to ImageNet classification without much modification. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5. This result amounts to a
1.2% improvement in top-1 accuracy over the best human-invented architectures while having 9 billion fewer FLOPS. On CIFAR-10 itself, NASNet achieves 2.4% error rate, which is also state-of-the-art.
Additionally, by simply varying the number of the convolutional cells and number of filters in the convolutional cells, we can create different versions of NASNets with different computational demands. Thanks to this property of the cells, we can generate a family of models that achieve accuracies superior to all human-invented models at equivalent or smaller computational budgets [60, 29]. Notably, the smallest version of NASNet achieves 74.0% top-1 accuracy on ImageNet, which is 3.1% better than previously engineered architectures targeted towards mobile and embedded vision tasks [24, 70]. | 1707.07012#7 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 8 | systems, as provided by the system authors. 6Except for TER whose scale is reversed.
(LSA) and is further complemented with semantic relations extracted from WordNet.
# 4.2 Grammar-based metrics (GBMs)
Grammar-based measures have been explored in related fields, such as MT (Giménez and Màrquez, 2008) or grammatical error correction (Napoles et al., 2016), and, in contrast to WBMs, do not rely on ground-truth references. To our knowledge, we are the first to consider GBMs for sentence-level NLG evaluation. We focus on two important properties of texts here – readability and grammaticality: | 1707.06875#8 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 8 | Finally, we show that the image features learned by NASNets are generically useful and transfer to other computer vision problems. In our experiments, the features learned by NASNets from ImageNet classification can be combined with the Faster-RCNN framework [47] to achieve state-of-the-art on the COCO object detection task for both the largest as well as mobile-optimized models. Our largest NASNet model achieves 43.1% mAP, which is 4% better than previous state-of-the-art.
# 2. Related Work
The proposed method is related to previous work in hyperparameter optimization [44, 4, 5, 54, 55, 6, 40] – especially recent approaches in designing architectures such as Neural Fabrics [48], DiffRNN [41], MetaQNN [3] and DeepArchitect [43]. A more flexible class of methods for designing architectures is evolutionary algorithms [65, 16, 57, 30, 46, 42, 67], yet they have not had as much success at large scale. Xie and Yuille [67] also transferred learned architectures from CIFAR-10 to ImageNet, but the performance of these models (top-1 accuracy 72.1%) is notably below previous state-of-the-art (Table 2). | 1707.07012#8 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 9 | • Readability quantifies the difficulty with which a reader understands a text, as used for e.g. evaluating summarisation (Kan et al., 2001) or text simplification (Francois and Bernhard, 2014). We measure readability by the Flesch Reading Ease score (RE) (Flesch, 1979), which calculates a ratio between the number of characters per sentence, the number of words per sentence, and the number of syllables per word. Higher RE score indicates a less complex utterance that is easier to read and understand. We also consider related measures, such as characters per utterance (len) and per word (cpw), words per sentence (wps), syllables per sentence (sps) and per word (spw), as well as polysyllabic words per utterance (pol) and per word (ppw). The higher these scores, the more complex the utterance. | 1707.06875#9 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
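To make the readability measures above concrete, the sketch below computes an approximate Flesch Reading Ease score together with a few of the surface statistics (len, wps, spw). It is a minimal sketch: the syllable counter is a crude vowel-group heuristic of our own rather than the syllabification used in the paper, and the 206.835 / 1.015 / 84.6 constants are the standard published Flesch coefficients.

```python
# Approximate Flesch Reading Ease and simple surface statistics for one utterance.
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels (an assumption, not the paper's method).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def surface_stats(utterance):
    sentences = [s for s in re.split(r"[.!?]+", utterance) if s.strip()]
    words = re.findall(r"[A-Za-z]+", utterance)
    syllables = sum(count_syllables(w) for w in words)
    stats = {
        "len": len(utterance),              # characters per utterance
        "wps": len(words) / len(sentences), # words per sentence
        "spw": syllables / len(words),      # syllables per word
    }
    # Standard Flesch Reading Ease formula (higher = easier to read).
    stats["RE"] = 206.835 - 1.015 * stats["wps"] - 84.6 * stats["spw"]
    return stats

print(surface_stats("X is a moderately priced restaurant in X."))
```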
1707.07012 | 9 | The concept of having one neural network interact with a second neural network to aid the learning process, or learning to learn or meta-learning [23, 49] has attracted much attention in recent years [1, 62, 14, 19, 35, 45, 15]. Most of these approaches have not been scaled to large problems like ImageNet. An exception is the recent work focused on learning an optimizer for ImageNet classification that achieved notable improvements [64].
The design of our search space took much inspiration from LSTMs [22], and Neural Architecture Search Cell [71]. The modular structure of the convolutional cell is also related to previous methods on ImageNet such as VGG [53], Inception [59, 60, 58], ResNet/ResNext [20, 68], and Xception/MobileNet [9, 24].
# 3. Method | 1707.07012#9 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 10 | • Grammaticality: In contrast to previous NLG methods, our corpus-based end-to-end systems can produce ungrammatical output by (a) generating word-by-word, and (b) learning from noisy data. As a first approximation of grammaticality, we measure the number of misspellings (msp) and the parsing score as returned by the Stanford parser (prs). The lower the msp, the more grammatically correct an utterance is. The Stanford parser score is not designed to measure grammaticality, however, it will generally prefer a grammatical parse to a non-grammatical one.7 Thus, lower parser scores indicate less grammatically correct utterances. In future work, we aim to use specifically designed grammar-scoring functions, e.g. (Napoles et al., 2016), once they become publicly available.
7http://nlp.stanford.edu/software/parser-faq.shtml
# 5 Human Data Collection | 1707.06875#10 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
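A toy version of the msp measure described above can be obtained by checking tokens against a reference word list; the sketch below does exactly that. It is a minimal sketch with a hypothetical miniature vocabulary, since the paper does not specify which lexicon or spellchecker it used.

```python
# Toy misspelling count (msp): tokens not found in a reference vocabulary.
import re

def misspelling_count(utterance, vocabulary):
    tokens = re.findall(r"[a-z]+", utterance.lower())
    return sum(1 for token in tokens if token not in vocabulary)

# Hypothetical miniature vocabulary; a real setup would use a full English word list.
vocab = {"x", "is", "a", "moderately", "priced", "restaurant", "in"}
print(misspelling_count("X is a moderatly priced restaurant in X", vocab))  # -> 1
```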
1707.07012 | 10 | # 3. Method
Our work makes use of search methods to find good convolutional architectures on a dataset of interest. The main search method we use in this work is the Neural Architecture Search (NAS) framework proposed by [71]. In NAS, a controller recurrent neural network (RNN) samples child networks with different architectures. The child networks are trained to convergence to obtain some accuracy on a held-out validation set. The resulting accuracies are used to update the controller so that the controller will generate better architectures over time. The controller weights are updated with policy gradient (see Figure 1).
Figure 1. Overview of Neural Architecture Search [71]. A controller RNN predicts architecture A from a search space with probability p. A child network with architecture A is trained to convergence achieving accuracy R. Scale the gradients of p by R to update the RNN controller. | 1707.07012#10 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
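The controller update described above is a standard policy-gradient (REINFORCE) loop: sample an architecture, measure its validation accuracy, and push up the log-probability of well-performing samples. The sketch below runs that loop over a tiny, made-up discrete search space; the step where the real method trains a child network to convergence is replaced here by a hypothetical scoring function purely for illustration.

```python
# Minimal REINFORCE sketch for architecture search over a toy discrete space.
import numpy as np

rng = np.random.default_rng(0)
OPS = ["3x3_conv", "5x5_conv", "3x3_maxpool"]   # toy operation choices (illustrative)
logits = np.zeros((2, len(OPS)))                 # controller: 2 independent decisions
baseline, lr = 0.0, 0.1

def child_accuracy(choice):
    # Stub reward standing in for "train child network, read validation accuracy".
    return 0.9 if choice[0] == 0 else 0.6

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(200):
    probs = [softmax(row) for row in logits]
    choice = [rng.choice(len(OPS), p=p) for p in probs]
    reward = child_accuracy(choice)
    baseline = 0.9 * baseline + 0.1 * reward     # moving-average baseline
    for i, p in enumerate(probs):                # REINFORCE: grad of log-prob times advantage
        grad = -p
        grad[choice[i]] += 1.0
        logits[i] += lr * (reward - baseline) * grad

print([OPS[int(np.argmax(row))] for row in logits])
```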
1707.06875 | 11 | 7http://nlp.stanford.edu/software/parser-faq.shtml
# 5 Human Data Collection
To collect human rankings, we presented the MR together with 2 utterances generated by different systems side-by-side to crowdworkers, who were asked to score each utterance on a 6-point Likert scale for: • Informativeness: Does the utterance provide all the useful information from the meaning representation? • Naturalness: Could the utterance have been produced by a native speaker? • Quality: How do you judge the overall quality of the utterance in terms of its grammatical correctness and fluency?
Each system output (see Table 1) was scored by 3 different crowdworkers. To reduce participants' bias, the order of appearance of utterances produced by each system was randomised and crowdworkers were restricted to evaluate a maximum of 20 utterances. The crowdworkers were selected from English-speaking countries only, based on their IP addresses, and asked to confirm that English was their native language. | 1707.06875#11 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 11 | The main contribution of this work is the design of a novel search space, such that the best architecture found on the CIFAR-10 dataset would scale to larger, higher-resolution image datasets across a range of computational settings. We name this search space the NASNet search space as it gives rise to NASNet, the best architecture found in our experiments. One inspiration for the NASNet search space is the realization that architecture engineering with CNNs often identifies repeated motifs consisting of combinations of convolutional filter banks, nonlinearities and a prudent selection of connections to achieve state-of-the-art results (such as the repeated modules present in the Inception and ResNet models [59, 20, 60, 58]). These observations suggest that it may be possible for the controller RNN to predict a generic convolutional cell expressed in terms of these motifs. This cell can then be stacked in series to handle inputs of arbitrary spatial dimensions and filter depth. | 1707.07012#11 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 12 | To assess the reliability of ratings, we calculated the intra-class correlation coefficient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977). The overall ICC across all three datasets is 0.45 (p < 0.001), which corresponds to a moderate agreement. In general, we find consistent differences in inter-annotator agreement per system and dataset, with lower agreements for LOLS than for RNNLG and TGEN. Agreement is highest for the SFHOTEL dataset, followed by SFREST and BAGEL (details provided in supplementary material).
# 6 System Evaluation
Table 2 summarises the individual systems' overall corpus-level performance in terms of automatic and human scores (details are provided in the supplementary material).
All WOMs produce similar results, with SIM showing different results for the restaurant domain (BAGEL and SFREST). Most GBMs show the same trend (with different levels of statistical significance), but RE shows inverse results. System performance is dataset-specific: For WBMs, the LOLS system consistently produces better results on BAGEL compared to TGEN, while for SFREST and SFHOTEL, LOLS is outperformed by RNNLG in | 1707.06875#12 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
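The inter-observer reliability figure reported above can in principle be reproduced from the raw items-by-raters rating matrix; the sketch below implements one common variant, ICC(2,1) from Shrout and Fleiss. The paper does not state which ICC variant it used, so treat the choice as an illustrative assumption, and the example ratings are invented.

```python
# ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC (Shrout & Fleiss).
import numpy as np

def icc_2_1(ratings):
    """`ratings` is an (n_items, k_raters) array of scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-item means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)
    residual = ratings - row_means[:, None] - col_means[None, :] + grand_mean
    ms_error = np.sum(residual ** 2) / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical example: 5 utterances rated by 3 crowdworkers on a 6-point scale.
scores = [[5, 4, 5], [3, 3, 4], [6, 5, 6], [2, 3, 2], [4, 4, 5]]
print(round(icc_2_1(scores), 3))
```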
1707.07012 | 12 | In our approach, the overall architectures of the convolutional nets are manually predetermined. They are composed of convolutional cells repeated many times where each convolutional cell has the same architecture, but different weights. To easily build scalable architectures for images of any size, we need two types of convolutional cells to serve two main functions when taking in a feature map
Figure 2. Scalable architectures for image classification consist of two repeated motifs termed Normal Cell and Reduction Cell. This diagram highlights the model architecture for CIFAR-10 and ImageNet. The number of times the Normal Cell is stacked between Reduction Cells, N, can vary in our experiments. | 1707.07012#12 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 13 | BAGEL SFHOTEL SFREST metric TGEN LOLS RNNLG LOLS RNNLG LOLS WOMs SIM GBMs More similar Better grammar(*) More overlap More overlap* More similar* Better grammar(*) More overlap* Better grammar More similar RE inform natural quality 4.77(Sd=1.09) 4.76(Sd=1.26) 4.77(Sd=1.19) More complex* 4.91(Sd=1.23) 4.67(Sd=1.25) 4.54(Sd=1.28) 5.47*(Sd=0.81) 4.99*(Sd=1.13) 4.54 (Sd=1.18) More complex* 5.27(Sd=1.02) 4.62(Sd=1.28) 4.53(Sd=1.26) 5.29*(Sd=0.94) 4.86 (Sd=1.13) 4.51 (Sd=1.14) More complex* 5.16(Sd=1.07) 4.74(Sd=1.23) 4.58(Sd=1.33) | 1707.06875#13 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 13 | as input: (1) convolutional cells that return a feature map of the same dimension, and (2) convolutional cells that return a feature map where the feature map height and width is reduced by a factor of two. We name the first type and second type of convolutional cells Normal Cell and Reduction Cell respectively. For the Reduction Cell, we make the initial operation applied to the cell's inputs have a stride of two to reduce the height and width. All of our operations that we consider for building our convolutional cells have an option of striding. | 1707.07012#13 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 14 | Table 2: System performance per dataset (summarised over metrics), where '*' denotes p < 0.05 for all the metrics and '(*)' shows significance at the p < 0.05 level for the majority of the metrics.
terms of WBMs. We observe that human informativeness ratings follow the same pattern as WBMs, while the average similarity score (SIM) seems to be related to human quality ratings.
Looking at GBMs, we observe that they seem to be related to naturalness and quality ratings. Less complex utterances, as measured by readability (RE) and word length (cpw), have higher naturalness ratings. More complex utterances, as measured in terms of their length (len), number of words (wps), syllables (sps, spw) and polysyllables (pol, ppw), have lower quality evaluation. Utterances measured as more grammatical are on average evaluated higher in terms of naturalness. These initial results suggest a relation between automatic metrics and human ratings at system level. However, average scores can be misleading, as they do not identify worst-case scenarios. This leads us to inspect the correlation of human and automatic metrics for each MR-system output pair at utterance level.
# 7 Relation of Human and Automatic Metrics
# 7.1 Human Correlation Analysis | 1707.06875#14 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 14 | Figure 2 shows our placement of Normal and Reduction Cells for CIFAR-10 and ImageNet. Note on ImageNet we have more Reduction Cells, since the incoming image size is 299x299 compared to 32x32 for CIFAR. The Reduction and Normal Cell could have the same architecture, but we empirically found it beneficial to learn two separate architectures. We use a common heuristic to double the number of filters in the output whenever the spatial activation size is reduced in order to maintain roughly constant hidden state dimension [32, 53]. Importantly, much like Inception and ResNet models [59, 20, 60, 58], we consider the number of motif repetitions N and the number of initial convolutional filters as free parameters that we tailor to the scale of an image classification problem.
What varies in the convolutional nets is the structures of | 1707.07012#14 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
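The sketch below illustrates the scaffolding described above: N "Normal" blocks stacked between stride-2 "Reduction" blocks, with the filter count doubled at every reduction. It is a schematic only, assuming PyTorch; the plain 3x3 convolutions stand in for the learned NASNet cells, and N and the initial filter count are the free parameters mentioned in the text.

```python
# Schematic NASNet-style scaffold: N normal blocks per stage, filters doubled at each reduction.
import torch
from torch import nn

def conv_block(c_in, c_out, stride=1):
    # Placeholder for a learned cell: 3x3 conv + BN + ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

def build_scaffold(n_repeats=2, init_filters=32, n_stages=3, num_classes=10):
    layers, c = [conv_block(3, init_filters)], init_filters
    for stage in range(n_stages):
        for _ in range(n_repeats):               # "Normal cells": keep spatial size and width
            layers.append(conv_block(c, c))
        if stage < n_stages - 1:                 # "Reduction cell": stride 2, double filters
            layers.append(conv_block(c, 2 * c, stride=2))
            c *= 2
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c, num_classes)]
    return nn.Sequential(*layers)

model = build_scaffold()
print(model(torch.randn(1, 3, 32, 32)).shape)    # -> torch.Size([1, 10])
```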
1707.06875 | 15 | # 7 Relation of Human and Automatic Metrics
# 7.1 Human Correlation Analysis
We calculate the correlation between automatic metrics and human ratings using the Spearman coefficient (ρ). We split the data per dataset and system in order to make valid pairwise comparisons. To handle outliers within human ratings, we use the median score of the three human raters.8 Following Kilickaya et al. (2017), we use the Williams' test (Williams, 1959) to determine significant differences between correlations. Table 3 summarises the utterance-level correlation | 1707.06875#15 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
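The chunk above (1707.06875#15) describes the correlation methodology: utterance-level Spearman ρ between each automatic metric and the median of three human ratings, with the Williams test used to compare correlations. A minimal sketch of the Spearman step is shown below using scipy; `metric_scores` and `human_ratings` are hypothetical placeholders, not the paper's data, and the Williams test is omitted.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: one automatic metric score per system output,
# and three human ratings (e.g. informativeness) per output.
metric_scores = np.array([0.42, 0.18, 0.77, 0.55, 0.31])
human_ratings = np.array([
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 4, 4],
    [2, 2, 3],
])

# Take the median over the three raters to reduce the effect of outliers.
median_ratings = np.median(human_ratings, axis=1)

# Utterance-level Spearman correlation between the metric and human judgements.
rho, p_value = spearmanr(metric_scores, median_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```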
1707.07012 | 15 | What varies in the convolutional nets is the structures of
the Normal and Reduction Cells, which are searched by the controller RNN. The structures of the cells can be searched within a search space defined as follows (see Appendix, Figure 7 for schematic). In our search space, each cell receives as input two initial hidden states hi and hi−1 which are the outputs of two cells in previous two lower layers or the input image. The controller RNN recursively predicts the rest of the structure of the convolutional cell, given these two initial hidden states (Figure 3). The predictions of the controller for each cell are grouped into B blocks, where each block has 5 prediction steps made by 5 distinct softmax classifiers corresponding to discrete choices of the elements of a block:
Step 1. Select a hidden state from hi, hi−1 or from the set of hidden states created in previous blocks.
Step 2. Select a second hidden state from the same options as in Step 1.
Step 3. Select an operation to apply to the hidden state selected in Step 1.
Step 4. Select an operation to apply to the hidden state selected in Step 2.
Step 5. Select a method to combine the outputs of Step 3 and 4 to create a new hidden state. | 1707.07012#15 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
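The chunk above (1707.07012#15) lists the five prediction steps the controller RNN makes for each of the B blocks in a cell. The sketch below is a loose illustration of that search-space structure, not the paper's controller: it replaces the five softmax classifiers with uniform random choices and uses an abbreviated, hypothetical operation list.

```python
import random

# Hypothetical, abbreviated operation vocabulary (a subset of the ops listed in the paper).
OPS = ["identity", "3x3 avg pool", "3x3 max pool",
       "3x3 sep conv", "5x5 sep conv", "7x7 sep conv"]
COMBINE = ["add", "concat"]

def sample_cell(num_blocks=5):
    """Sample one convolutional cell: num_blocks blocks of 5 choices each."""
    # h_i and h_{i-1}: the two initial hidden states available to every block.
    hidden_states = ["h_i", "h_i-1"]
    blocks = []
    for b in range(num_blocks):
        # Steps 1-2: pick two inputs from the current set of hidden states.
        in1 = random.choice(hidden_states)
        in2 = random.choice(hidden_states)
        # Steps 3-4: pick an operation to apply to each selected input.
        op1 = random.choice(OPS)
        op2 = random.choice(OPS)
        # Step 5: pick how to combine the two results into a new hidden state.
        comb = random.choice(COMBINE)
        new_state = f"block_{b}"
        hidden_states.append(new_state)  # becomes a candidate input for later blocks
        blocks.append((in1, op1, in2, op2, comb, new_state))
    return blocks

for block in sample_cell():
    print(block)
```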
1707.06875 | 16 | results between automatic metrics and human ratings, listing the best (i.e. highest absolute ρ) results for each type of metric (details provided in supplementary material). Our results suggest that: • In sum, no metric produces an even moderate correlation with human ratings, independently of dataset, system, or aspect of human rating. This contrasts with our initially promising results on the system level (see Section 6) and will be further discussed in Section 8. Note that similar inconsistencies between document- and sentence-level evaluation results are observed in MT (Specia et al., 2010). • Similar to our results in Section 6, we find that WBMs show better correlations to human ratings of informativeness (which reflects content selection), whereas GBMs show better correlations to quality and naturalness. • Human ratings for informativeness, naturalness and quality are highly correlated with each other, with the highest correlation between the latter two (ρ = 0.81) reflecting that they both target surface realisation. • All WBMs produce similar results (see Figure 1 and 2): They are strongly correlated with each other, and most of them produce correlations with human ratings which | 1707.06875#16 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 16 | Step 4. Select an operation to apply to the hidden state selected in Step 2.
Step 5. Select a method to combine the outputs of Step 3 and 4 to create a new hidden state.
The algorithm appends the newly-created hidden state to the set of existing hidden states as a potential input in subsequent blocks. The controller RNN repeats the above 5 prediction steps B times corresponding to the B blocks in a convolutional cell. In our experiments, selecting B = 5 provides good results, although we have not exhaustively searched this space due to computational limitations.
In steps 3 and 4, the controller RNN selects an operation to apply to the hidden states. We collected the following set of operations based on their prevalence in the CNN literature:
identity • 1x7 then 7x1 convolution • 3x3 average pooling • 5x5 max pooling • 1x1 convolution • 3x3 depthwise-separable conv • 5x5 depthwise-separable conv • 7x7 depthwise-separable conv
1x3 then 3x1 convolution • 3x3 dilated convolution • 3x3 max pooling • 7x7 max pooling • 3x3 convolution | 1707.07012#16 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
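The chunk above (1707.07012#16) enumerates the operation vocabulary available to the controller in steps 3 and 4. Below is a hedged sketch of one possible way to map a few of those operation names onto concrete layers with tf.keras; the filter count, strides, and dilation rate are illustrative assumptions, not the paper's settings.

```python
import tensorflow as tf

def make_op(name, filters):
    """Map an operation name from the search space to a Keras layer (illustrative only)."""
    layers = tf.keras.layers
    if name == "identity":
        return layers.Lambda(lambda x: x)
    if name == "3x3 average pooling":
        return layers.AveragePooling2D(pool_size=3, strides=1, padding="same")
    if name == "3x3 max pooling":
        return layers.MaxPooling2D(pool_size=3, strides=1, padding="same")
    if name == "3x3 depthwise-separable conv":
        return layers.SeparableConv2D(filters, 3, padding="same")
    if name == "5x5 depthwise-separable conv":
        return layers.SeparableConv2D(filters, 5, padding="same")
    if name == "3x3 dilated convolution":
        return layers.Conv2D(filters, 3, dilation_rate=2, padding="same")
    if name == "1x7 then 7x1 convolution":
        return tf.keras.Sequential([
            layers.Conv2D(filters, (1, 7), padding="same"),
            layers.Conv2D(filters, (7, 1), padding="same"),
        ])
    raise ValueError(f"unknown op: {name}")

# Example: apply one candidate op to a dummy feature map.
x = tf.random.normal([1, 32, 32, 16])
y = make_op("5x5 depthwise-separable conv", filters=32)(x)
print(y.shape)  # (1, 32, 32, 32)
```

Strides are fixed to 1 and padding to "same" in this sketch so that every op preserves spatial size, which keeps the add/concat combination in step 5 shape-compatible.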
1707.06875 | 17 | • All WBMs produce similar results (see Figure 1 and 2): They are strongly correlated with each other, and most of them produce correlations with human ratings which are not significantly different from each other. GBMs, on the other hand, show greater diversity. • Correlation results are system- and dataset-specific (details provided in supplementary material). We observe the highest correlation for TGEN on BAGEL (Figures 1 and 2) and LOLS on SFREST, whereas RNNLG often shows low correlation between metrics and human ratings. This lets us conclude that WBMs and GBMs are sensitive to different systems and datasets. • The highest positive correlation is observed between the number of words (wps) and informative- [footnote 8: As an alternative to using the median human judgment for each item, a more effective way to use all the human judgments could be to use Hovy et al. (2013)'s MACE tool for inferring the reliability of judges.] | 1707.06875#17 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 17 | 1x3 then 3x1 convolution • 3x3 dilated convolution • 3x3 max pooling • 7x7 max pooling • 3x3 convolution
In step 5 the controller RNN selects a method to combine the two hidden states, either (1) element-wise addition between two hidden states or (2) concatenation between two hidden states along the filter dimension. Finally, all of the unused hidden states generated in the convolutional cell are concatenated together in depth to provide the final cell output.
To allow the controller RNN to predict both Normal Cell and Reduction Cell, we simply make the controller have 2 × 5B predictions in total, where the first 5B predictions are for the Normal Cell and the second 5B predictions are for the Reduction Cell.
[Figure 3 residue: the controller selects two hidden states from hidden layers A and B and combines them into a new hidden layer; repeated B times.] | 1707.07012#17 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
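The chunk above (1707.07012#17) covers step 5 (combining two hidden states by element-wise addition or concatenation along the filter dimension) and the final cell output, which concatenates all unused hidden states in depth. Below is a small numpy sketch of those two rules on toy NHWC arrays; it is illustrative only and assumes matching spatial sizes.

```python
import numpy as np

def combine(a, b, method):
    """Step 5: combine two hidden states (NHWC feature maps of equal spatial size)."""
    if method == "add":
        return a + b                              # element-wise addition
    if method == "concat":
        return np.concatenate([a, b], axis=-1)    # concatenation along the filter dimension
    raise ValueError(method)

def cell_output(hidden_states, used):
    """Concatenate in depth every hidden state never used as an input to a block."""
    unused = [h for name, h in hidden_states.items() if name not in used]
    return np.concatenate(unused, axis=-1)

# Toy example: two 4x4 feature maps with 8 channels each.
h1 = np.ones((1, 4, 4, 8))
h2 = np.ones((1, 4, 4, 8))
print(combine(h1, h2, "add").shape)     # (1, 4, 4, 8)
print(combine(h1, h2, "concat").shape)  # (1, 4, 4, 16)

states = {"b0": h1, "b1": h2, "b2": combine(h1, h2, "concat")}
print(cell_output(states, used={"b0", "b1"}).shape)  # only b2 is unused -> (1, 4, 4, 16)
```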
1707.06875 | 18 | Columns: BAGEL (TGEN, LOLS); SFHOTEL (RNNLG, LOLS); SFREST (RNNLG, LOLS). Values: 0.30* (BLEU-1) -0.19* (TER) -0.16* (TER) 0.33* (wps) -0.25* (len) -0.19* (cpw) 0.20* (ROUGE) -0.19* (TER) 0.16* (METEOR) 0.16* (ppw) -0.28* (wps) 0.31* (prs) 0.09 (BLEU-1) 0.10* (METEOR) 0.10* (METEOR) -0.09 (ppw) -0.17* (len) -0.16* (ppw) 0.14* (LEPOR) -0.20* (TER) -0.12* (TER) 0.13* (cpw) -0.18* (sps) -0.17* (spw) 0.13* (SIM) 0.17* (ROUGE) 0.09* (METEOR) 0.11* (len) -0.19* (wps) 0.11* (prs) | 1707.06875#18 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |