Dataset schema (per-row fields):
- doi: string (length 10)
- chunk-id: int64 (0–936)
- chunk: string (401–2.02k chars)
- id: string (12–14 chars)
- title: string (8–162 chars)
- summary: string (228–1.92k chars)
- source: string (31 chars)
- authors: string (7–6.97k chars)
- categories: string (5–107 chars)
- comment: string (4–398 chars)
- journal_ref: string (8–194 chars)
- primary_category: string (5–17 chars)
- published: string (8 chars)
- updated: string (8 chars)
- references: list
1610.04286
13
Partial success on transferring from simulation to a real robot has been reported [18, 19, 20]. These works focus primarily on the problem of transfer from a more restricted, simpler version of a task to the full, more difficult version. While transfer from simulation to reality remains difficult, progress has been made with directly learning neural network control policies on a real robot, both from low-dimensional representations of the state and from visual input (e.g. [21, 22]). While the results are impressive, to achieve sufficient data efficiency these works currently rely on relatively restrictive task setups, specialized visual architectures, and carefully designed training regimes. Alternative approaches embrace big-data ideas for robotics [23, 24].

# 4 Experiments

For training in simulation, we use the Asynchronous Advantage Actor-Critic (A3C) framework introduced in [6]. Compared to DQN [25], the model simultaneously learns a policy and a value function for predicting expected future rewards, and can be trained with CPUs, using multiple threads. A3C has been shown to converge faster than DQN, which makes it advantageous for research experimentation.
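To make the actor-critic objective concrete, here is a minimal PyTorch sketch of an A3C-style loss over one rollout. The 0.5 value-loss weight, the entropy coefficient, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an A3C-style actor-critic update for one rollout.
# Shapes and coefficients are illustrative, not the paper's.
import torch
import torch.nn.functional as F

def a3c_loss(logits, values, actions, returns, entropy_cost=1e-3):
    """Combined policy-gradient + value loss.

    logits:  (T, num_actions) action scores from the policy head
    values:  (T,) state-value estimates from the value head
    actions: (T,) actions actually taken
    returns: (T,) bootstrapped n-step returns
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    advantages = returns - values.detach()              # advantage estimate
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantages).mean()         # REINFORCE with baseline
    value_loss = F.mse_loss(values, returns)            # critic regression
    entropy = -(probs * log_probs).sum(dim=-1).mean()   # exploration bonus
    return policy_loss + 0.5 * value_loss - entropy_cost * entropy
```

In A3C proper, each of several CPU threads computes such a loss on its own rollout and applies asynchronous gradient updates to shared parameters.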
1610.04286#13
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
14
For the manipulation domain of the Jaco arm, the agent policy controls nine degrees of freedom using velocity commands: six joints on the arm plus three actuated fingers. The full policy Π(A|s, θ) comprises nine joint policies learnt by the agent, each one a softmax connected to the inputs from the previous layer and any lateral connections. Each joint policy i has three actions (a fixed positive velocity, a fixed negative velocity, and a zero velocity): π_i(a_i|s; θ_i). This discrete action set, while potentially lacking the precision of a continuous control policy, has worked well in practice. There is also a single value function that is linearly connected to the previous layer and lateral layers: V(s, θ_v).
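The factored policy head can be sketched as follows; this is our own illustrative PyTorch reading of the description (one three-way softmax per joint plus a linear value head), not released code.

```python
# Illustrative sketch of the factored policy head: nine independent
# softmax policies with three velocity actions each, plus a single
# linear value head, all fed by the same feature vector.
import torch
import torch.nn as nn

class FactoredPolicyHead(nn.Module):
    def __init__(self, feature_dim, num_joints=9, actions_per_joint=3):
        super().__init__()
        # one 3-way softmax per joint: {+v, -v, 0} velocity commands
        self.joint_heads = nn.ModuleList(
            [nn.Linear(feature_dim, actions_per_joint) for _ in range(num_joints)]
        )
        self.value_head = nn.Linear(feature_dim, 1)  # V(s, theta_v)

    def forward(self, features):
        joint_logits = [head(features) for head in self.joint_heads]
        value = self.value_head(features).squeeze(-1)
        return joint_logits, value

# The full policy factorises as the product of the per-joint policies:
# Pi(a|s) = prod_i pi_i(a_i|s).
```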
1610.04286#14
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
15
We evaluate both feedforward and recurrent neural networks. Both have convolutional input layers followed by either a fully connected layer or an LSTM. A standard-sized network is used for the simulation-trained column and a reduced-capacity network is used for the robot-trained columns, chosen because we found empirically that more capacity does not accelerate learning (see Section 4.2), presumably because of the features reused from the previous column. Details of the architecture are given in Figure 2 and Table 1. In all variants, the input is 3x64x64 pixels and the output has 28 units (nine discrete joint policies of three actions each, plus one value function). The MuJoCo physics simulator [26] is used to train the first column for our experiments, with a rendered camera view to provide observations. In the real domain, a similarly positioned RGB camera provides the input. While the modeled Jaco and its dynamics are quite accurate, the visual discrepancies are obvious, as shown in Figure 3.
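The encoder just described (conv stack, then a fully connected layer or an LSTM) might look like the following sketch. Kernel sizes and strides are not given in the text, so those are assumptions; the default unit counts echo the wide variant of Table 1.

```python
# Hedged sketch of the pixel encoder: two conv layers over a 3x64x64
# observation, followed by a fully connected layer or an LSTM cell.
import torch
import torch.nn as nn

class PixelEncoder(nn.Module):
    def __init__(self, c1=16, c2=32, fc=512, use_lstm=False, lstm=128):
        super().__init__()
        # kernel sizes/strides are assumptions; the paper gives only widths
        self.convs = nn.Sequential(
            nn.Conv2d(3, c1, kernel_size=8, stride=4), nn.ReLU(),   # conv 1
            nn.Conv2d(c1, c2, kernel_size=4, stride=2), nn.ReLU(),  # conv 2
            nn.Flatten(),
        )
        with torch.no_grad():
            conv_out = self.convs(torch.zeros(1, 3, 64, 64)).shape[1]
        self.fc = nn.Sequential(nn.Linear(conv_out, fc), nn.ReLU())
        self.lstm = nn.LSTMCell(fc, lstm) if use_lstm else None

    def forward(self, obs, state=None):
        h = self.fc(self.convs(obs))
        if self.lstm is not None:
            state = self.lstm(h, state)  # state defaults to zeros when None
            h = state[0]
        return h, state
```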
1610.04286#15
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
16
The experiments are all focused on the task of reaching to a visual target, with only pure rewards provided as feedback (no shaped rewards). Though simple, this task requires that the state of the arm and the position of the target are correctly inferred from visual observations, and that the agent learns robust control over a high-dimensional state space. The arm is set to a random start position at the beginning of every episode, and the target is placed randomly within a 40cm by 30cm area. The agent receives a reward of +1 if its palm is within 10cm of the target, and episodes last for at most 50 steps.

Table 1: Network sizes (units per layer and total parameters) for the wide and narrow variants of the feedforward and recurrent architectures.

|             | feedforward wide | feedforward narrow | recurrent wide | recurrent narrow |
|-------------|------------------|--------------------|----------------|------------------|
| fc (output) | 28               | 28                 | 28             | 28               |
| LSTM        | –                | –                  | 128            | 16               |
| fc          | 512              | 32                 | 128            | 16               |
| conv 2      | 32               | 8                  | 32             | 8                |
| conv 1      | 16               | 8                  | 16             | 8                |
| params      | 621K             | 39K                | 299K           | 37K              |

Figure 2: Detailed schematic of progressive recurrent network architecture. The activations of the LSTM are connected as inputs to the progressive column. The factored policy and single value function are shown.
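A toy sketch of the episode logic implied by this setup and the termination rules described in the next chunk; distances are in metres, and names like `palm_pos` and `safety_violation` are hypothetical.

```python
# Sparse reward and termination logic for the reacher task (illustrative).
import numpy as np

MAX_STEPS = 50
REACH_RADIUS = 0.10  # +1 reward while the palm is within 10cm of the target

def step_reward(palm_pos, target_pos):
    """Sparse reward: +1 while the palm is within 10cm of the target."""
    return 1.0 if np.linalg.norm(palm_pos - target_pos) < REACH_RADIUS else 0.0

def episode_done(t, safety_violation):
    """Episodes end after 50 steps or on a safety violation
    (self-intersection, touching the table top, or exceeding joint limits)."""
    return t >= MAX_STEPS or safety_violation
```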
1610.04286#16
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
18
Figure 3: Sample images from the real camera input and the MuJoCo-rendered image. Though a more realistic model appearance could have been used, the blocky Jaco model was used to accelerate MuJoCo rendering, which was done on CPUs. The images show the diversity of Jaco start positions and target positions.

Episodes last for at most 50 steps. Though there is some variance due to randomized starting states, a well-performing agent can achieve an average score of over 30 points by quickly reaching the target and remaining in safe positions at all times. The episode is terminated if the agent causes a safety violation through self-intersection, by touching the table top, or by exceeding set joint limits.

# 4.1 Training in simulation

The first column is trained in simulation using A3C, as previously mentioned, using a wide feedforward or recurrent network. Intuitively, it makes sense to use a larger-capacity network for training in simulation, to reach maximum performance. We verified this intuition by comparing wide and narrow

[Figure 4 plots: "Simulation-trained first column" (feedforward, left) and "Simulation-trained first column (LSTM)" (right); learning curves for wide vs. narrow networks, reward vs. steps.]
1610.04286#18
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
19
Figure 4: Learning curves are shown for wide and narrow versions of the feedforward (left) and recurrent (right) models, which are trained with the MuJoCo simulator. The plots show mean and variance over 5 training runs with different seeds and hyperparameters. Stable performance is reached after approximately 50 million steps, which is more than one million episodes. While both the feedforward and the recurrent models learn the task, the recurrent network reaches a higher final mean score.

[Figure 5 plot: "Real-robot-trained progressive nets vs. baselines"; rewards vs. steps (0–60000) for wide/narrow progressive columns, a finetuned wide column, and wide/narrow columns trained from scratch.]

Figure 5: Real robot training: We compare progressive, finetuning, and ‘from scratch’ learning curves. All experiments use a recurrent architecture, trained on the robot, from RGB inputs. We compare wide and narrow columns for both the progressive experiments and the randomly initialized baseline. For all results, a median-filtered solid curve is shown overlaid on the raw rewards (dotted line). The ‘from scratch’ baseline was a randomly initialized narrow or wide column, both of which fail to get any reward during training.
1610.04286#19
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
20
network architectures, and found that the narrow network had slower learning and worse performance (see Figure 4). We also see that the LSTM model outperforms the feedforward model by an average of 3 points per episode. Even on this relatively simple task, full performance is only achieved after substantial interaction with the environment, on the order of 50 million steps, a number that is infeasible on a real robot. The simulation training, compared with the real robot, is accelerated because of fast rendering, multithreaded learning algorithms, and the ability to train continuously without human involvement. We calculate that learning this task, which trains to convergence in 24 hours on a CPU compute cluster, would take 53 days on the real robot even with continuous training for 24 hours a day. Moreover, multiple experiments were run in parallel to explore hyperparameters in simulation; this sort of search would multiply the hypothetical real-robot training time. In simulation, we explore learning rates and entropy costs, which are sampled uniformly at random on a log scale. Learning rates are sampled between 5e-5 and 5e-3 and entropy costs between 1e-5 and 1e-2. The configuration with the best final performance from a grid of 30 is chosen as the first column. For the real Jaco experiments, both learning rates and entropy costs were optimized separately using a simulated transfer experiment with a single-threaded agent (A2C).
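The sampling scheme is simple to reproduce. A small illustrative helper (not the authors' tooling) that draws values uniformly on a log scale between the stated bounds:

```python
# Log-uniform hyperparameter sampling: values drawn uniformly in log space.
import numpy as np

rng = np.random.default_rng(0)

def log_uniform(low, high, size=None):
    """Sample uniformly on a log scale in [low, high]."""
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

learning_rates = log_uniform(5e-5, 5e-3, size=30)
entropy_costs = log_uniform(1e-5, 1e-2, size=30)
# Each (learning rate, entropy cost) pair defines one configuration; the
# run with the best final performance is kept as the first column.
```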
1610.04286#20
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
21
# 4.2 Transfer to the robot

To train on the real Jaco, a flat target is manually repositioned within a 40cm by 30cm area on every third episode. Rewards are given automatically by tracking the colored target and scoring the position of the Jaco gripper relative to it. We train a baseline from scratch, a finetuned first column, and a progressive second column. Each experiment is run for approximately 60000 steps (about four hours). The baseline is a randomly initialized narrow network trained from scratch; we also try a randomly initialized wide network. As seen in Figure 5 (green curve), the randomly initialized column fails to learn and the agent gets zero reward throughout training. The progressive second column reaches 34 points, while the finetuning experiment, which starts from the simulation-trained column and continues training on the robot, does not reach the same score as the progressive network.

Finetuning vs. progressive approaches. The progressive approach is clearly well-suited for continual learning scenarios, where it is important to mitigate forgetting of previous tasks while supporting transfer to new tasks, but the advantage is less intuitive for curricula of tasks where the focus is on
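The automatic reward pipeline could, for instance, localize the target by color thresholding; the ranges and names below are assumptions, since the paper does not detail its tracker. The resulting target position feeds the same distance-based reward sketched earlier.

```python
# Hypothetical sketch of color-based target localization for the
# automatic reward signal. Thresholds are arbitrary assumptions.
import numpy as np

def locate_target(rgb, lo=(150, 0, 0), hi=(255, 80, 80)):
    """Return the mean pixel coordinate of pixels inside an RGB color range,
    or None if no pixel matches (target occluded or out of view)."""
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return np.array([xs.mean(), ys.mean()])
```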
1610.04286#21
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
22
[Figure 6 plots: four panels, "Subtle perspective changes", "Significant perspective changes", "Subtle color changes", and "Significant color changes"; final rewards for finetuned vs. progressive runs, with trials sorted by decreasing final reward (0–300).]

Figure 6: To analyse the relative stability and performance of finetuning vs. progressive approaches, we add color or perspective changes to the environment in simulation and then train 300 networks with different random seeds, learning rates, and entropy costs. The progressive networks have significantly higher performance and less sensitivity to hyperparameter selection for all four experiments.
1610.04286#22
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
23
maximising transfer learning. To assess this empirically, we start with a simulator-trained first column, as described above, and then either finetune that column or add a narrow progressive column and retrain for the reacher task under a variety of conditions, including small or large color changes and small or large perspective changes. For each of these environment perturbations, we train 300 times with different seeds, learning rates, and entropy costs, which are the most sensitive hyperparameters. As shown in Figure 6, we find that progressive networks are more stable and reach higher final performance than finetuning.

# 4.3 Transfer to a dynamic robot task with proprioception
1610.04286#23
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
24
# 4.3 Transfer to a dynamic robot task with proprioception

Unlike the finetuning paradigm, which is unable to accommodate changing network morphology or new input modalities, progressive nets offer a flexibility that is advantageous for transferring to new data sources while still leveraging previous knowledge. To demonstrate this, we train a second column on the reacher task but add proprioceptive features as an additional input, alongside the RGB images. The proprioceptive features are joint angles and velocities for each of the 9 joints of the arm and fingers, 18 in total, input to an MLP (a single linear layer plus ReLU) and joined with the outputs of the convolutional stack. Then, a third progressive column is added that learns only from the proprioceptive features, while the visual input is forwarded through the previous columns and their features are used via the lateral connections. A diagram of this architecture is shown in Figure 7 (left).
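A hedged sketch of this proprioception-only third column; the layer sizes, the additive lateral fusion, and all names are our assumptions based on the description.

```python
# Sketch of a proprioception-only progressive column whose features are
# combined, via a lateral connection, with frozen features forwarded
# through the earlier (visual) columns.
import torch
import torch.nn as nn

class ProprioColumn(nn.Module):
    def __init__(self, proprio_dim=18, hidden=32, lateral_dim=16, out_dim=64):
        super().__init__()
        # encoder 3: a single linear layer plus ReLU over joint angles/velocities
        self.mlp = nn.Sequential(nn.Linear(proprio_dim, hidden), nn.ReLU())
        # lateral adapter for features coming from the frozen previous column
        self.lateral = nn.Linear(lateral_dim, hidden)
        self.out = nn.Sequential(nn.Linear(hidden, out_dim), nn.ReLU())

    def forward(self, proprio, prev_column_features):
        # previous columns are frozen; only this column's weights are trained
        h = self.mlp(proprio) + self.lateral(prev_column_features.detach())
        return self.out(h)
```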
1610.04286#24
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
25
To evaluate this architecture, we train on a dynamic target task. By employing a small motorized pulley, the red target is smoothly translated across the table with random reversals in the motion, creating a tracking task that requires a different control policy while maintaining a similar visual presentation. Other aspects of the task, including rewards and episode lengths, were kept the same. If the second column is trained on this conveyor task, the learning is relatively slow, and full performance is reached after 50000 steps (about 4 hours). If the second column is instead trained on the static reacher task, and the third column is then trained on the conveyor task, we observe immediate transfer: full performance is reached almost from the start (Figure 7, right). This demonstrates both the utility of progressive nets for curriculum tasks and the capability of the architecture to immediately reuse previously learnt features.

# 5 Discussion

Transfer learning, the ability to accumulate and transfer knowledge to new domains, is a core characteristic of intelligent beings. Progressive neural networks offer a framework that can be used for continual learning of many tasks and which facilitates transfer learning, even across the divide that separates simulation from the real robot. We took full advantage of the flexibility and computational scaling afforded by simulation and compared many hyperparameters and architectures for a random-start, random-target control task with visual input, then successfully transferred the skill to an agent training on the real robot.
1610.04286#25
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
26
In order to fulfill the potential of deep reinforcement learning applied in real-world robotic domains, learning needs to become many times more efficient. One route to achieving this is via transfer learning from simulation-trained agents. We have described an initial set of experiments that prove that progressive nets can be used to achieve reliable, fast transfer for pixel-to-action RL policies.

[Figure 7 plot: "Real-robot-trained progressive nets (conveyor task)"; rewards vs. steps (0–60000) for a progressive third column (static task, then dynamic task) vs. a progressive second column trained directly on the dynamic task.]
1610.04286#26
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
27
Figure 7: Real robot training results are shown for the dynamic ‘conveyor’ task. A three-column architecture is depicted (left), in which vision (x) is used to train column one, vision and proprioception (φ) are used in column two, and only proprioception is used to train column three. Encoder 1 is a convolutional net, encoder 2 is a convolutional net with proprioceptive features added before the LSTM, and encoder 3 is an MLP. The learning curves (right) show the results of training on a conveyor (dynamic target) task. If the conveyor task is learned as the third column, rather than the second, then the learning is significantly faster.

# References

[1] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1071–1079. Curran Associates, 2014. URL http://papers.nips.cc/paper/5444-learning-neural-network-policies-with-guided-policy-search-under-unknown-dynamics.pdf.

[2] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
1610.04286#27
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
28
[3] N. Heess, G. Wayne, D. Silver, T. P. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2944–2952, 2015. URL http://papers.nips.cc/paper/5796-learning-continuous-control-policies-by-stochastic-value-gradients.

[4] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. Proceedings of the International Conference on Learning Representations (ICLR), 2016. URL http://arxiv.org/abs/1509.02971.

[5] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
1610.04286#28
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
29
[6] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Int'l Conf. on Machine Learning (ICML), 2016.

[7] S. Gu, T. P. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In ICML 2016, 2016.

[8] A. Rusu, N. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.

[9] X. Peng, B. Sun, K. Ali, and K. Saenko. Learning deep object detectors from 3d models. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 1278–1286, 2015.
1610.04286#29
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
30
[10] H. Su, C. R. Qi, Y. Li, and L. J. Guibas. Render for CNN: viewpoint estimation in images using CNNs trained with rendered 3d model views. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2686–2694, 2015.

[11] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 97–105, 2015.

[12] E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 4068–4076, 2015.

[13] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance. CoRR, abs/1412.3474, 2014. URL http://arxiv.org/abs/1412.3474.
1610.04286#30
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
31
[14] E. Tzeng, C. Devin, J. Hoffman, C. Finn, X. Peng, S. Levine, K. Saenko, and T. Darrell. Towards adapting deep visuomotor representations from simulated to real environments. CoRR, abs/1511.07111, 2015. URL http://arxiv.org/abs/1511.07111.

[15] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.

[16] H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and M. Marchand. Domain-adversarial neural networks. CoRR, abs/1412.4446, 2014. URL http://arxiv.org/abs/1412.4446.

[17] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan. Domain separation networks. In Advances in Neural Information Processing Systems, pages 343–351, 2016.
1610.04286#31
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
32
[18] S. Barrett, M. E. Taylor, and P. Stone. Transfer learning for reinforcement learning on a physical robot. In Ninth International Conference on Autonomous Agents and Multiagent Systems - Adaptive Learning Agents Workshop (AAMAS - ALA), 2010.

[19] S. James and E. Johns. 3D simulation for robot arm control with deep Q-learning. ArXiv e-prints, 2016.

[20] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 3357–3364. IEEE, 2017.

[21] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.

[22] S. Levine, N. Wagener, and P. Abbeel. Learning contact-rich manipulation skills with guided policy search. In IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26-30 May, 2015, pages 156–163, 2015.
1610.04286#32
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.04286
33
[23] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In ICRA 2016, 2016.

[24] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, page 0278364917710318, 2016.

[25] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

[26] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems (IROS), 2012.
1610.04286#33
Sim-to-Real Robot Learning from Pixels with Progressive Nets
Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.
http://arxiv.org/pdf/1610.04286
Andrei A. Rusu, Mel Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, Raia Hadsell
cs.RO, cs.LG
null
null
cs.RO
20161013
20180522
[ { "id": "1606.04671" } ]
1610.02850
0
# Impatient DNNs – Deep Neural Networks with Dynamic Time Budgets

Manuel Amthor ([email protected]), Erik Rodner ([email protected]), Joachim Denzler ([email protected])
Computer Vision Group, Friedrich Schiller University Jena, Germany (www.inf-cv.uni-jena.de)

# Abstract

We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e. a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications.
1610.02850#0
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.02850
1
Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.

# Introduction

Deep and especially convolutional neural networks are the current basis for the majority of state-of-the-art approaches in vision. Their ability to learn very effective representations of visual data has led to several breakthroughs in important applications, such as scene understanding for autonomous driving [1], object detection [6], and robotics [4]. The main obstacle for their application is still the computational cost during prediction for a new test image. Many previous works have focused on speeding up DNN inference in general, achieving constant speed-ups for a certain loss in prediction accuracy [10, 16].
1610.02850#1
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
1
# Abstract

Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of the source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.

# Introduction
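As a rough illustration of the encoder idea in the abstract, the following PyTorch sketch embeds characters, applies a 1-D convolution, and max-pools over time to shorten the sequence; the vocabulary size, channel counts, kernel size, and pooling stride are arbitrary assumptions, not the paper's configuration.

```python
# Character-level convolutional encoder with max-pooling over time.
import torch
import torch.nn as nn

class CharConvEncoder(nn.Module):
    def __init__(self, vocab_size=300, char_dim=128, channels=256,
                 kernel_size=5, pool_stride=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim)
        self.conv = nn.Conv1d(char_dim, channels, kernel_size,
                              padding=kernel_size // 2)
        # pooling with stride > 1 shortens the sequence, so downstream
        # layers see a representation ~pool_stride times shorter
        self.pool = nn.MaxPool1d(kernel_size=pool_stride, stride=pool_stride)

    def forward(self, char_ids):
        x = self.embed(char_ids).transpose(1, 2)   # (B, char_dim, T)
        x = torch.relu(self.conv(x))               # local character n-gram features
        return self.pool(x).transpose(1, 2)        # (B, T // pool_stride, channels)

# Example: a batch of 2 sequences of 100 characters becomes length 20.
enc = CharConvEncoder()
out = enc(torch.randint(0, 300, (2, 100)))
print(out.shape)  # torch.Size([2, 20, 256])
```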
1610.03017#1
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
2
In contrast, we focus on inference with dynamic time budgets. Our networks provide a series of predictions with increasing computational cost and accuracy. This allows for (1) dynamic interruption of the prediction in time-critical applications (anytime ability, Figure 1, left), or (2) predictions with a dynamic time budget given individually for each test image a priori (Figure 1, right). Dynamic budget approaches can, for example, deal with varying energy resources, a property especially useful for real-time visual inference in robotics [17]. Furthermore, early predictions allow for immediate action selection in reinforcement learning scenarios [21].

[Figure 1 panels: "interruptable CNN (anytime ability)" and "CNN with dynamic budget given a-priori", each showing predictions over time; the full caption follows in the next chunk.]
1610.02850#2
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
2
# Introduction Nearly all previous work in machine translation has been at the level of words. Aside from our intuitive understanding of word as a basic unit of meaning (Jackendoff, 1992), one reason behind this is that sequences are significantly longer when represented in characters, compounding the problem of data sparsity and modeling long-range dependencies. This has driven NMT research to be almost exclusively word-level (Bahdanau et al., 2015; Sutskever et al., 2015). (∗The majority of this work was completed while the author was visiting New York University.)
1610.03017#2
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
3
Figure 1: Illustration of convolutional neural network prediction in dynamic budget scenarios: (left) prediction can be interrupted at any time or (right) the budget is given before each prediction. The main idea of our approach is to formulate the learning of dynamic budget predictors as a generalized risk minimization that involves the distribution of budgets provided for the application. The distribution of possible budgets has been either previously neglected or assumed to be uniform [12]. However, we show that such easily available prior information can significantly help to improve the expected accuracy. Our formulation leads to a straightforward modification of convolutional neural network (CNN) architectures and their training. In particular, we add several early prediction and loss layers along the standard processing pipeline of a DNN (Figure 1 and Figure 2). According to our risk minimization framework for dynamic budget predictors, all of these layers need to be learned jointly with a weighted combination derived from a time-budget distribution. Whereas this strategy is directly related to DNN learning strategies, such as deep supervision [24] and inception architectures [23], we demonstrate its usefulness for adapting to varying resources during testing.
1610.02850#3
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
3
Despite their remarkable success, word-level NMT models suffer from several major weaknesses. For one, they are unable to model rare, out-of-vocabulary words, making them limited in translating languages with rich morphology such as Czech, Finnish and Turkish. If one uses a large vocabulary to combat this (Jean et al., 2015), the complexity of training and decoding grows linearly with respect to the target vocabulary size, leading to a vicious cycle. To address this, we present a fully character-level NMT model that maps a character sequence in a source language to a character sequence in a target language. We show that our model outperforms a baseline with a subword-level encoder on DE-EN and CS-EN, and achieves a comparable result on FI-EN and RU-EN. A purely character-level NMT model with a basic encoder was proposed as a baseline by Luong and Manning (2016), but training it was prohibitively slow. We were able to train our model at a reasonable speed by drastically reducing the length of source sentence representation using a stack of convolutional, pooling and highway layers. One advantage of character-level models is that they are better suited for multilingual translation than their word-level counterparts which require a separate word vocabulary for each language. We
1610.03017#3
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
4
The paper is structured as follows. After discussing related work, we define dynamic budget predictors and derive a new learning framework based on risk minimization with budget distributions (Sect. 2). Our framework can be directly applied to deep and especially convolutional neural networks as described in Sect. 3. Experiments in Sect. 4 show the advantages of our approach for different architectures, datasets, and budget distributions. Related work on anytime prediction The work of Karayev et al. [12] presented an approach that iteratively and dynamically selects feature representations to maximize the area above an entropy vs. cost curve. Our approach, however, focuses on a static order of predictors and is able to incorporate budget distributions expected for the application. Fröhlich et al. [5] proposed a semantic segmentation approach with anytime classification capability. Their method is based on random decision forests learned in a layer-wise fashion. Xu et al. [26] consider anytime classification with unknown budgets by combining a cost-sensitive support vector machine with feature learning. Similar to [5], their predictors are learned in a greedy fashion and not learned jointly as in our case. Learning all of the predictors with shared parameters jointly allows us to share computations while directly optimizing with respect to expected accuracy during training. The paper of [25] presents an algorithm for
1610.02850#4
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
4
verify this by training a single model to translate four languages (German, Czech, Finnish and Rus- sian) to English. Our multilingual character-level model outperforms the subword-level baseline by a considerable margin in all four language pairs, strongly indicating that a character-level model is more flexible in assigning its capacity to different language pairs. Furthermore, we observe that our multilingual character-level translation even exceeds the quality of bilingual translation in three out of four language pairs, both in BLEU score metric and human evaluation. This demonstrates excel- lent parameter efficiency of character-level transla- tion in a multilingual setting. We also showcase our model’s ability to handle intra-sentence code- switching while performing language identification on the fly. The contributions of this work are twofold: we empirically show that (1) we can train character-to- character NMT model without any explicit segmen- tation; and (2) we can share a single character-level encoder across multiple languages to build a mul- tilingual translation system without increasing the model size. # 2 Background: Attentional Neural Machine Translation
1610.03017#4
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
5
learning tree ensembles with a constrained time budget available during training. In our case, the whole distribution of budgets is given during training. Related work on deep supervision and DNNs with multiple losses There are multiple methods that use a similar architecture of deep neural networks to ours, characterized by multiple loss layers and joint training of them. For example, [24] refers to such a training strategy as “deep supervision” and shows that it allows for training deeper networks in a robust fashion. A very similar technique has been used in [7] for improved scene recognition. Furthermore, multiple loss layers are often used for multi-task learning, where the goal is to jointly predict various outputs [27]. In contrast to these works, our paper focuses on the impact of such an architecture on the ability of DNNs to deal with dynamic time budgets during inference. Furthermore, we show that such an architectural design can be directly derived from a very general risk minimization framework for predictors with dynamic budgets.
1610.02850#5
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
5
# 2 Background: Attentional Neural Machine Translation Neural machine translation (NMT) is a recently proposed approach to machine translation that builds a single neural network which takes as an input a source sentence $X = (x_1, \ldots, x_{T_x})$ and generates its translation $Y = (y_1, \ldots, y_{T_y})$, where $x_t$ and $y_{t'}$ are source and target symbols (Bahdanau et al., 2015; Sutskever et al., 2015; Luong et al., 2015; Cho et al., 2014a). Attentional NMT models have three components: an encoder, a decoder and an attention mechanism. Encoder Given a source sentence $X$, the encoder constructs a continuous representation that summarizes its meaning with a recurrent neural network (RNN). A bidirectional RNN is often implemented as proposed in (Bahdanau et al., 2015). A forward encoder reads the input sentence from left to right: $\overrightarrow{h}_t = \overrightarrow{f}_{\mathrm{enc}}(E_x(x_t), \overrightarrow{h}_{t-1})$. Similarly, a backward encoder reads it from right to left: $\overleftarrow{h}_t = \overleftarrow{f}_{\mathrm{enc}}(E_x(x_t), \overleftarrow{h}_{t+1})$, where $E_x$ is
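A minimal sketch of such a bidirectional encoder in PyTorch (dimensions are illustrative placeholders, not taken from the paper; PyTorch's bidirectional GRU concatenates forward and backward states per timestep exactly as described):

```python
import torch
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    """Bidirectional GRU encoder: returns C = {h_1, ..., h_Tx},
    where each h_t concatenates forward and backward hidden states."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # E_x lookup table
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True,
                          bidirectional=True)

    def forward(self, x):                 # x: (batch, Tx) token ids
        emb = self.embed(x)               # (batch, Tx, emb_dim)
        h, _ = self.rnn(emb)              # (batch, Tx, 2*hid_dim)
        return h                          # h_t = [fwd_t ; bwd_t]
```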
1610.03017#5
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
6
Related work on speeding up convolutional neural networks There are multiple works that focus on speeding up DNNs and the special case of convolutional neural networks (CNNs). Applied and adapted techniques range from low-rank approximations [2, 6, 10] to FFT computations of the involved convolutions [19]. The Fast R-CNN method of [6] speeds up fully-connected layers by a simple SVD approximation. Similar techniques have been presented by [2] and [10]. The paper of [8] provides an empirical study of the effects of CNN architectural design choices on the computation time and the achieved recognition performance. A straightforward technique to speed up convolutions with large filter sizes uses Fast Fourier Transforms as studied by [19]. Furthermore, efficient filtering techniques, such as the Winograd transformation [14], are applicable as well. Our approach also tries to speed up inference of deep neural networks, i.e. a forward pass. However, instead of approximating operations performed in single layers, we achieve a significant speed-up by allowing the algorithm to deal with dynamic time budgets. Our research is therefore orthogonal to the techniques described above, and combining the two is straightforward. # 2 Learning Dynamic Budget Predictors
1610.02850#6
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
6
the source embedding lookup table, and $\overrightarrow{f}_{\mathrm{enc}}$ and $\overleftarrow{f}_{\mathrm{enc}}$ are recurrent activation functions such as long short-term memory units (LSTMs, (Hochreiter and Schmidhuber, 1997)) or gated recurrent units (GRUs, (Cho et al., 2014b)). The encoder constructs a set of continuous source sentence representations $C$ by concatenating the forward and backward hidden states at each timestep: $C = \{h_1, \ldots, h_{T_x}\}$, where $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$. Attention First introduced in (Bahdanau et al., 2015), the attention mechanism lets the decoder attend more to different source symbols for each target symbol. More concretely, it computes the context vector $c_{t'}$ at each decoding time step $t'$ as a weighted sum of the source hidden states: $c_{t'} = \sum_{t=1}^{T_x} \alpha_{t' t} h_t$. Similarly to (Chung et al., 2016; Firat et al., 2016a), each attentional weight $\alpha_{t' t}$ represents how relevant the $t$-th source token $x_t$ is to the $t'$-th target token $y_{t'}$, and is computed as: $\alpha_{t' t} = \frac{1}{Z} \exp\big(\mathrm{score}\big(E_y(y_{t'-1}), s_{t'-1}, h_t\big)\big) \quad (1)$
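The single-hidden-layer score network and the attention weights of Eq. (1) can be sketched as follows (a generic Bahdanau-style implementation, not the authors' code; all dimensions are placeholders):

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """score() as a feed-forward net with one hidden layer, followed by
    a softmax over source positions (Eq. (1)) and the context sum."""
    def __init__(self, emb_dim, dec_dim, enc_dim, att_dim=256):
        super().__init__()
        self.proj = nn.Linear(emb_dim + dec_dim + enc_dim, att_dim)
        self.v = nn.Linear(att_dim, 1, bias=False)

    def forward(self, y_prev_emb, s_prev, H):
        # y_prev_emb: (batch, emb_dim); s_prev: (batch, dec_dim)
        # H: (batch, Tx, enc_dim) source hidden states h_1..h_Tx
        Tx = H.size(1)
        q = torch.cat([y_prev_emb, s_prev], dim=-1)
        q = q.unsqueeze(1).expand(-1, Tx, -1)          # repeat over Tx
        scores = self.v(torch.tanh(self.proj(torch.cat([q, H], dim=-1))))
        alpha = torch.softmax(scores.squeeze(-1), dim=-1)     # Eq. (1)
        context = torch.bmm(alpha.unsqueeze(1), H).squeeze(1)  # c_t'
        return context, alpha
```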
1610.03017#6
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
7
# 2 Learning Dynamic Budget Predictors In this section, we derive a simple yet powerful learning scheme for dynamic budget predictors. Without loss of generality, we focus on time budgets in the following. Specification of dynamic budgets An important challenge for dynamic budget approaches is that the budget available for inference during testing is not known during training, and for anytime scenarios is not even known during inference itself. For anytime tasks, we need to learn algorithms that can be interrupted at several time steps and balance the trade-off between computing direct predictions of an output $y$ for an example $\mathbf{x}$ and computing intermediate outputs that help later on for further refinements of the predictions. Without any further specification, this trade-off is ill-posed. However, in many applications, we know something about the distribution $p(t \mid \mathbf{x}, y)$ of time budgets $t$ available to the algorithm for a given input-output pair $(\mathbf{x}, y)$. In the following, we assume that this distribution is either given or can be modeled for an application.
1610.02850#7
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
7
$\alpha_{t' t} = \frac{1}{Z} \exp\big(\mathrm{score}\big(E_y(y_{t'-1}), s_{t'-1}, h_t\big)\big) \quad (1)$ where $Z = \sum_{k=1}^{T_x} \exp\big(\mathrm{score}\big(E_y(y_{t'-1}), s_{t'-1}, h_k\big)\big)$ is the normalization constant. score() is a feed-forward neural network with a single hidden layer that scores how well the source symbol $x_t$ and the target symbol $y_{t'}$ match. $E_y$ is the target embedding lookup table and $s_{t'}$ is the target hidden state at time $t'$. Decoder Given a source context vector $c_{t'}$, the decoder computes its hidden state at time $t'$ as: $s_{t'} = f_{\mathrm{dec}}(E_y(y_{t'-1}), s_{t'-1}, c_{t'})$. Then, a parametric function $\mathrm{out}_k()$ returns the conditional probability of the next target symbol being $k$: $p(y_{t'} = k \mid y_{<t'}, X) = \frac{1}{Z} \exp\big(\mathrm{out}_k\big(E_y(y_{t'-1}), s_{t'}, c_{t'}\big)\big) \quad (2)$ where $Z$ is again the normalization constant: $Z = \sum_j \exp\big(\mathrm{out}_j(E_y(y_{t'-1}), s_{t'}, c_{t'})\big)$. Training The entire model can be trained end-to-end by minimizing the negative conditional log-likelihood, which is defined as: $\mathcal{L} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{t'=1}^{T_y^{(n)}} \log p\big(y_{t'} = y_{t'}^{(n)} \mid y_{<t'}^{(n)}, X^{(n)}\big)$
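A corresponding decoder step and training loss, again as an illustrative sketch rather than the paper's implementation (layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """One step: s_t' = f_dec(E_y(y_{t'-1}), s_{t'-1}, c_t'),
    then a linear output layer playing the role of out_k()."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, ctx_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # E_y
        self.cell = nn.GRUCell(emb_dim + ctx_dim, hid_dim)
        self.out = nn.Linear(emb_dim + hid_dim + ctx_dim, vocab_size)

    def step(self, y_prev, s_prev, context):
        e = self.embed(y_prev)
        s = self.cell(torch.cat([e, context], dim=-1), s_prev)
        logits = self.out(torch.cat([e, s, context], dim=-1))
        return logits, s

# Training: the negative conditional log-likelihood is cross-entropy
# accumulated over target positions, e.g. per step:
# loss += nn.functional.cross_entropy(logits, gold_symbols)
```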
1610.03017#7
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
8
Risk minimization with budget distributions In the following, we develop a framework for learning dynamic budget predictors using risk minimization. We consider inference algorithms $f$ that provide predictions $y \in \mathcal{Y}$ for input examples $\mathbf{x} \in \mathcal{X}$ at different times $t \in \mathbb{R}$, i.e. we have $f : \mathcal{X} \times \mathbb{R} \to \mathcal{Y}$. Learning the parameters $\boldsymbol{\theta}$ of $f$ is done by minimizing the following regularized risk: $\operatorname{argmin}_{\boldsymbol{\theta}} \int_{t \in \mathbb{R}} \int_{y \in \mathcal{Y}} \int_{\mathbf{x} \in \mathcal{X}} L(f(\mathbf{x}, t; \boldsymbol{\theta}), y) \cdot p(\mathbf{x}, y, t) \, d\mathbf{x} \, dy \, dt + R(\boldsymbol{\theta}) \quad (1)$ with $L$ being a suitable loss function, $R(\boldsymbol{\theta})$ being a regularization term, and $p(\mathbf{x}, y, t)$ being the joint distribution of an input-output pair $(\mathbf{x}, y)$ and the available time $t$. This formulation does not require any differentiation between a-priori given budget or anytime scenarios.
1610.02850#8
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.02850
9
We further assume that the time available is independent of the actual example and its label. This is a reasonable assumption, since the available time is in most applications just based on a limitation of hardware or data transfer resources. Since we are given a training set $D = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, the learning objective reduces to the empirical risk: $\operatorname{argmin}_{\boldsymbol{\theta}} \int_{t \in \mathbb{R}} \Big( \sum_{i=1}^{n} L(f(\mathbf{x}_i, t; \boldsymbol{\theta}), y_i) \Big) \, p(t) \, dt + R(\boldsymbol{\theta}) \quad (2)$ The predictor $f$ is an algorithm performing a finite sequence of atomic operations. Therefore, the prediction output will only be changing at discrete time steps $t_1, \ldots, t_K$: $f(\mathbf{x}, t; \boldsymbol{\theta}) = f(\mathbf{x}, t_k; \boldsymbol{\theta}) \stackrel{\text{def}}{=} f_k(\mathbf{x}; \boldsymbol{\theta}_k)$ for $t_k \leq t < t_{k+1}$ (3), and $f(\mathbf{x}, t; \boldsymbol{\theta}) = f_K(\mathbf{x}; \boldsymbol{\theta}_K)$ for $t \geq t_K$ (4). Furthermore, before $t_1$, no output estimate is available. Since this leads to a constant additive term independent of $\boldsymbol{\theta}$, we can ignore this aspect in the following. In total, Eq. (2) simplifies as follows: $\operatorname{argmin}_{\boldsymbol{\theta}} \sum_{k=1}^{K} w_k \Big( \sum_{i=1}^{n} L(f_k(\mathbf{x}_i; \boldsymbol{\theta}_k), y_i) \Big) + R(\boldsymbol{\theta}) \quad (5)$
1610.02850#9
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
9
# 3 Fully Character-Level Translation # 3.1 Why Character-Level? The benefits of character-level translation over word-level translation are well known. Chung et al. (2016) present three main arguments: character-level models (1) do not suffer from out-of-vocabulary issues, (2) are able to model different, rare morphological variants of a word, and (3) do not require segmentation. Particularly, text segmentation is highly non-trivial for many languages and problematic even for English as word tokenizers are either manually designed or trained on a corpus using an objective function that is unrelated to the translation task at hand, which makes the overall system sub-optimal. Here we present two additional arguments for character-level translation. First, a character-level translation system can easily be applied to a multilingual translation setting. Between European languages where the majority of alphabets overlaps, for instance, a character-level model may easily identify morphemes that are shared across different languages. A word-level model, however, will need a separate word vocabulary for each language, allowing no cross-lingual parameter sharing.
1610.03017#9
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
10
$\operatorname{argmin}_{\boldsymbol{\theta}} \sum_{k=1}^{K} w_k \Big( \sum_{i=1}^{n} L(f_k(\mathbf{x}_i; \boldsymbol{\theta}_k), y_i) \Big) + R(\boldsymbol{\theta}) \quad (5)$ with weights $w_k = \int_{t_k}^{t_{k+1}} p(t) \, dt$ for $1 \leq k < K$ and $w_K = \int_{t_K}^{\infty} p(t) \, dt$. As can be seen, we have a simple learning objective, which is a weighted combination of the learning objectives of each of the individual predictors $f_k$. If some of the parameters are shared between the predictors, which is the case for our approach presented in Sect. 3, each term in the objective cannot be optimized independently and joint optimization is necessary. Sharing parameters is essential for optimizing shared computations towards maximizing the expected accuracy of the complete model. The information about the time-budget distribution defines the weights of the loss terms in an intuitive manner: if there is a high probability of the time budget being between $t_k$ and $t_{k+1}$, the loss of $f_k$ has a strong impact on the overall learning objective and the parameters $\boldsymbol{\theta}_k$ including the shared ones should be tuned towards reducing the loss of $f_k$ rather than contributing significantly to other predictors. # 3 Learning Impatient DNNs with Early Prediction Layers In this section, we show how a single deep neural network with additional prediction layers is well suited for providing a series of prediction models.
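A minimal numerical sketch of the weights $w_k$ defined above, under an assumed budget density (the exponential pdf, the breakpoints and the truncation bound are illustrative, not values from the paper):

```python
import numpy as np

def budget_weights(pdf, boundaries, t_max=1e3, grid=10000):
    """w_k = integral of p(t) over [t_k, t_{k+1}) for k < K,
    and over [t_K, inf) for the last predictor (truncated at t_max)."""
    edges = list(boundaries) + [t_max]
    w = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ts = np.linspace(lo, hi, grid)
        w.append(np.trapz(pdf(ts), ts))   # numerical integration
    return np.array(w)

# Example: exponentially distributed budgets, predictors at t = 1..5
pdf = lambda t: 0.5 * np.exp(-0.5 * t)
print(budget_weights(pdf, boundaries=[1, 2, 3, 4, 5]))
```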
1610.02850#10
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
10
Also, by not segmenting source sentences into words, we no longer inject our knowledge of words and word boundaries into the system; instead, we encourage the model to discover an internal structure of a sentence by itself and learn how a sequence of symbols can be mapped to a continuous meaning representation. # 3.2 Related Work To address these limitations associated with word-level translation, a recent line of research has investigated using sub-word information. Costa-Jussà and Fonollosa (2016) replaced the word-lookup table with convolutional and highway layers on top of character embeddings, while still segmenting source sentences into words. Target sentences were also segmented into words, and prediction was made at word-level. Similarly, Ling et al. (2015) employed a bidirectional LSTM to compose character embeddings into word embeddings. At the target side, another LSTM takes the hidden state of the decoder and generates the target word, character by character. While this system is completely open-vocabulary, it also requires offline segmentation. Also, character-to-word and word-to-character LSTMs significantly slow down training.
1610.03017#10
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
11
In this section, we show how a single deep neural network with additional prediction layers is well suited for providing a series of prediction models. [Figure 2 shows the modified AlexNet-style pipeline: conv/batch-norm/relu/pool blocks with an early prediction and loss layer branching off after each block, weighted according to the time-budget distribution, and three head variants: FC only, spatial average pooling, and 4×4 average pooling.] Figure 2: (Left) Modification of the AlexNet architecture for dynamic budgets and early predictions. (Right) Possible architectures for early prediction.
1610.02850#11
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
11
Most recently, Luong and Manning (2016) proposed a hybrid scheme that consults character-level information whenever the model encounters an out-of-vocabulary word. As a baseline, they also implemented a purely character-level NMT model with 4 layers of unidirectional LSTMs with 512 cells, with attention over each character. Despite being extremely slow (approximately 3 months to train), the character-level model gave comparable performance to the word-level baseline. This shows the possibility of fully character-level translation. Having a word-level decoder restricts the model to only being able to generate previously seen words. Sennrich et al. (2015) introduced a subword-level NMT model that is capable of open-vocabulary translation using subword-level segmentation based on the byte pair encoding (BPE) algorithm. Starting from a character vocabulary, the algorithm identifies frequent character n-grams in the training data and iteratively adds them to the vocabulary, ultimately giving a subword vocabulary which consists of words, subwords and characters. Once the segmentation rules have been learned, their model performs subword-to-subword translation (bpe2bpe) in the same way as word-to-word translation.
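For illustration, the BPE merge-learning loop just described can be sketched in a few lines (a simplified toy version, not Sennrich et al.'s released implementation; the space-separated symbol format with an end-of-word marker is one common convention):

```python
import collections
import re

def learn_bpe(word_freqs, num_merges):
    """Greedy BPE: repeatedly merge the most frequent symbol pair.
    word_freqs maps space-separated symbol sequences to counts,
    e.g. {'l o w </w>': 5, 'l o w e r </w>': 2}."""
    vocab = dict(word_freqs)
    merges = []
    for _ in range(num_merges):
        pairs = collections.Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # merge the pair everywhere, respecting symbol boundaries
        pat = re.compile(r'(?<!\S)' + re.escape(' '.join(best)) + r'(?!\S)')
        vocab = {pat.sub(''.join(best), w): f for w, f in vocab.items()}
    return merges
```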
1610.03017#11
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
12
Figure 2: (Left) Modification of the AlexNet architecture for dynamic budgets and early predictions. (Right) Possible architectures for early prediction. Early prediction layers To obtain a series of predictions, we add K additional layers to a common DNN architecture as illustrated in Figure 2. We refer to these layers as early prediction (EP) layers in the following. The output $f_k(\mathbf{x})$ of these layers has as many dimensions as $y$. Already after the first layers, our approach is able to perform predictions with only a small number of computational operations. The layered architecture of a DNN has an important advantage, since all $f_k$ naturally share a large set of their parameters and also a large number of computations. Anytime approaches require a forward pass to go through all early prediction layers that can be processed until interruption. In case of non-parallel computation, the computational overhead of the early prediction layers should therefore be reduced as much as possible.
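The anytime inference loop implied here can be sketched as follows (a minimal illustration; `blocks`, `heads` and the `interrupted` polling callable are hypothetical placeholders for the trunk stages, the EP layers and the budget signal):

```python
import torch

def anytime_predict(blocks, heads, x, interrupted):
    """Run the shared trunk block by block; after each block, refresh
    the early prediction f_k(x). On interruption, the latest estimate
    is returned."""
    y_hat = None
    h = x
    with torch.no_grad():
        for block, head in zip(blocks, heads):
            h = block(h)          # shared computation
            y_hat = head(h)       # cheap early prediction
            if interrupted():
                break
    return y_hat
```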
1610.02850#12
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
12
Perhaps the work that is closest to our end goal is (Chung et al., 2016), which used a subword-level encoder from (Sennrich et al., 2015) and a fully character-level decoder (bpe2char). Their results show that character-level decoding performs better than subword-level decoding. Motivated by this work, we aim for fully character-level translation at both sides (char2char). Outside NMT, our work is based on a few existing approaches that applied convolutional networks to text, most notably in text classification (Zhang et al., 2015; Xiao and Cho, 2016). Also, we drew inspiration for our multilingual models from previous work that showed the possibility of training a single recurrent model for multiple languages in domains other than translation (Tsvetkov et al., 2016; Gillick et al., 2015). # 3.3 Challenges Sentences are on average 6 (DE, CS and RU) to 8 (FI) times longer when represented in characters. This poses three major challenges to achieving fully character-level translation.
1610.03017#12
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
13
The right part of Figure 2 shows different choices for EP layers we experimented with: (1) FC only, which is a simple single fully-connected (FC) layer followed by a softmax layer, (2) AVG, which performs average pooling across the spatial dimensions of the previous layer before a fully-connected layer, leading to a significantly reduced number of parameters for the EP layers, and (3) AVG 4 × 4, which allows for preserving rough spatial information by performing average pooling in 4 × 4 = 16 uniformly-sized regions. Learning with weighted losses For learning, each of the EP layers is connected to a loss layer. The overall loss during training is exactly the weighted combination we derived in the previous section in Eq. (2). In theory, training our Impatient DNNs does not require any further modifications and learning can be done with standard back-propagation and gradient descent. However, we observed in experiments that batch normalization [9] leads to significantly more robust training and is even required to achieve convergence at all in most cases.
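To make the construction concrete, the following PyTorch sketch builds a toy network with AVG-type EP heads and the weighted joint loss; all layer sizes are illustrative assumptions, not the architecture used in the paper:

```python
import torch
import torch.nn as nn

class ImpatientCNN(nn.Module):
    """Toy CNN with early prediction heads of the AVG type:
    spatial average pooling followed by one fully-connected layer."""
    def __init__(self, num_classes, channels=(32, 64, 128)):
        super().__init__()
        self.blocks, self.heads = nn.ModuleList(), nn.ModuleList()
        in_ch = 3
        for ch in channels:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, padding=1),
                nn.BatchNorm2d(ch),   # BN stabilizes the joint training
                nn.ReLU(),
                nn.MaxPool2d(2)))
            self.heads.append(nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(ch, num_classes)))
            in_ch = ch

    def forward(self, x):
        outs = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            outs.append(head(x))
        return outs                    # f_1(x), ..., f_K(x)

def impatient_loss(outs, y, weights):
    """Weighted combination of the per-head losses."""
    ce = nn.functional.cross_entropy
    return sum(w * ce(o, y) for w, o in zip(weights, outs))
```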
1610.02850#13
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
13
Sentences are on average 6 (DE, CS and RU) to 8 (FI) times longer when represented in characters. This poses three major challenges to achieving fully character-level translation. (1) Training/decoding latency For the decoder, although the sequence to be generated is much longer, each character-level softmax operation costs considerably less compared to a word- or subword-level softmax. Chung et al. (2016) report that character-level decoding is only 14% slower than subword-level decoding. On the other hand, the computational complexity of the attention mechanism grows quadratically with respect to the sentence length, as it needs to attend to every source token for every target token. This makes a naive character-level approach, such as in (Luong and Manning, 2016), computationally prohibitive. Consequently, reducing the length of the source sequence is key to ensuring reasonable speed in both training and decoding. (2) Mapping character sequence to continuous representation The arbitrary relationship between the orthography of a word and its meaning is a well-known problem in linguistics (de Saussure, 1916). Building a character-level encoder is arguably a more difficult problem, as the encoder needs to learn a highly non-linear function from a long sequence of character symbols to a meaning representation.
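A back-of-the-envelope check of the quadratic blow-up (the example lengths and the pooling factor below are illustrative assumptions, not measurements from the paper):

```python
# Attention requires Tx * Ty score() evaluations per sentence pair.
subword_src, subword_tgt = 25, 25
char_src, char_tgt = 25 * 6, 25 * 6     # ~6x longer in characters

print(subword_src * subword_tgt)        # 625 evaluations
print(char_src * char_tgt)              # 22500 -> 36x more
pooled_src = char_src // 5              # source shortened by pooling
print(pooled_src * char_tgt)            # 4500 -> only 7.2x more
```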
1610.03017#13
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
14
(3) Long range dependencies in characters A character-level encoder needs to model dependencies over longer timespans than a word-level encoder does. # 4 Fully Character-Level NMT # 4.1 Encoder We design an encoder that addresses all the challenges discussed above by using convolutional and pooling layers aggressively to both (1) drastically shorten the input sentence and (2) efficiently capture local regularities. Inspired by the character-level language model from (Kim et al., 2015), our encoder first reduces the source sentence length with a series of convolutional, pooling and highway layers. The shorter representation, instead of the full character sequence, is passed through a bidirectional GRU to (3) help it resolve long term dependencies. We illustrate the proposed encoder in Figure 1 and discuss each layer in detail below. Embedding We map the sequence of source characters (x_1, . . . , x_Tx) to a sequence of character embeddings of dimensionality d_c: X = (C(x_1), . . . , C(x_Tx)) ∈ R^{d_c×T_x}, where T_x is the number of source characters and C is the character embedding lookup table: C ∈ R^{d_c×|C|}.
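A minimal numpy sketch of this embedding lookup; the toy character vocabulary, the small d_c, and the example sentence are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical character vocabulary and a small embedding dimensionality d_c.
chars = list(" abcdefghijklmnopqrstuvwxyz")
char_to_id = {c: i for i, c in enumerate(chars)}
d_c = 8
rng = np.random.default_rng(0)
C = rng.normal(size=(d_c, len(chars)))   # lookup table C in R^{d_c x |C|}

def embed(sentence: str) -> np.ndarray:
    """Map a character sequence to X = (C(x_1), ..., C(x_Tx)) in R^{d_c x T_x}."""
    ids = [char_to_id[ch] for ch in sentence]
    return C[:, ids]

X = embed("the cat sat on")
print(X.shape)  # (8, 14): one d_c-dimensional column per source character
```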
1610.03017#14
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
15
Figure 3: Types of time-budget distributions we consider in our paper. Weighting schemes In our experiments, we are interested in the effect of different time-budget distributions provided during learning. To simulate them, we consider the following schemes for early prediction layer weights w_1, . . . , w_K: (STD) standard DNN training, i.e. only the last prediction matters: w_K = 1 and w_k = 0 otherwise, (EQ) uniform weights for uniform time-budget distributions: w_k ∝ 1, (LIN) linearly increasing weights, i.e. small time budgets are unlikely: w_k ∝ k, (POLY) polynomially increasing weights: w_k ∝ k^γ with γ > 1, (ILIN, IPOLY) decreasing weights, i.e. small time budgets are likely: w_k = w'_{K+1−k} for weights w'_k of the former schemes, and (NORM) small and large time budgets are rare and layers in the middle of the architecture are given a high weight: w_k ∝ exp(−β · (k − K/2)²) with β = 0.34. All of these schemes simulate different budget specifications of an application. An illustration of several instances is given in Figure 3. # 4 Experiments In the following, we evaluate our approach with respect to different dynamic budget schemes and compare with standard DNN training and other relevant baselines.
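These schemes are straightforward to instantiate. A small numpy sketch follows; the normalization to sum one is our addition for readability, since only the relative weights matter in the training objective.

```python
import numpy as np

def ep_weights(K, scheme, gamma=2.0, beta=0.34):
    """Early-prediction weights w_1..w_K for the schemes above (normalized to sum 1)."""
    k = np.arange(1, K + 1, dtype=float)
    if scheme == "STD":                      # only the last prediction matters
        w = np.zeros(K)
        w[-1] = 1.0
    elif scheme == "EQ":                     # uniform time-budget distribution
        w = np.ones(K)
    elif scheme == "LIN":                    # w_k proportional to k
        w = k
    elif scheme == "POLY":                   # w_k proportional to k^gamma, gamma > 1
        w = k ** gamma
    elif scheme in ("ILIN", "IPOLY"):        # reversed weights of the former schemes
        w = ep_weights(K, scheme[1:], gamma)[::-1].copy()
    elif scheme == "NORM":                   # budgets concentrated on middle layers
        w = np.exp(-beta * (k - K / 2) ** 2)
    else:
        raise ValueError(scheme)
    return w / w.sum()

# Weighted joint training objective over per-layer losses L_1..L_K (toy values):
losses = np.array([2.1, 1.7, 1.3, 1.0, 0.8, 0.6])
print(ep_weights(6, "LIN") @ losses)
```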
1610.02850#15
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
15
Convolution One-dimensional convolution operation is then used along consecutive character embeddings. Assuming we have a single filter f ∈ R^{d_c×w} of width w, we first apply padding to the beginning and the end of X, such that the padded sentence X' ∈ R^{d_c×(T_x+w−1)} is w − 1 symbols longer. We then apply narrow convolution between X' and f such that the k-th element of the output Y_k is given as: Y_k = (X' ∗ f)_k = Σ_{i,j} (X'_{[:, k−w+1:k]} ⊗ f)_{i,j}, (3) where ⊗ denotes elementwise matrix multiplication and ∗ is the convolution operation. X'_{[:, k−w+1:k]} is the sliced subset of X' that contains all the rows but only w adjacent columns. The padding scheme employed above, commonly known as half convolution, ensures that the length of the output is identical to the input's: Y ∈ R^{1×T_x}. We just illustrated how a single convolutional filter of fixed width might be applied to a sentence. In order to extract informative character patterns of different lengths, we employ a set of filters of varying widths. More concretely, we use a filter
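A numpy sketch of this half convolution for a single filter; the filter values, the width w = 3, and placing all w − 1 zero columns on the left are illustrative assumptions.

```python
import numpy as np

def half_conv_single_filter(X, f):
    """Convolve padded X' with one filter f of shape (d_c, w); output length T_x.

    Here all w - 1 zero columns are padded on the left, so window k covers the
    w columns of X' ending at character position k.
    """
    d_c, T_x = X.shape
    w = f.shape[1]
    X_pad = np.pad(X, ((0, 0), (w - 1, 0)))      # X' in R^{d_c x (T_x + w - 1)}
    Y = np.empty(T_x)
    for k in range(T_x):
        Y[k] = np.sum(X_pad[:, k:k + w] * f)     # elementwise product, then sum
    return Y

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 14))                     # embedded sentence from the sketch above
f = rng.normal(size=(8, 3))                      # one filter of width w = 3
print(half_conv_single_filter(X, f).shape)       # (14,): same length as the input
```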
1610.03017#15
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
16
# 4 Experiments In the following, we evaluate our approach with respect to different dynamic budget schemes and compare with standard DNN training and other relevant baselines. Experimental setup and datasets For evaluation, we conducted experiments on two ob- ject classification datasets. The 15-Scenes [15] dataset comprises a total of 4,485 images covering categories from kitchen and living room to suburban and industrial. Each category contains between 200 and 400 images each, from which we took 100 images for training, as suggested by [15], and the remaining ones for testing. The training set is further divided into 90 images for actual training and 10 images for validation. The MIT-67 [20] indoor scenes database is comprised of 67 categories. We follow the procedure of [20] and take 80 images for training and 20 for testing. Again, the training set is split in order to obtain a validation set of 8 images per class. Since our datasets are too small for DNN training from scratch, we perform fine-tuning of different models pre-trained on ImageNet, e.g. AlexNet [13] and VGG19 [22]. The positions of EP layers for AlexNet are given in Figure 2. For VGG19, we add EP layers after each block of convolutional layers. Please note that the last “early” prediction layer is always the output layer of the original CNN architecture.
1610.02850#16
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
16
Figure 1: Encoder architecture schematics (bottom to top: character embeddings; single-layer convolution with ReLU; max pooling with stride 5; segment embeddings; four-layer highway network; single-layer bidirectional GRU). Underscore denotes padding. A dotted vertical line delimits each segment. The stride of pooling s is 5 in the diagram. bank F = {f_1, . . . , f_m} where f_i ∈ R^{d_c×i×n_i} is a collection of n_i filters of width i. Our model uses m = 8, hence extracts character n-grams up to 8 characters long. Outputs from all the filters are stacked upon each other, giving a single representation Y ∈ R^{N×T_x}, where the dimensionality of each column is given by the total number of filters N = Σ_{i=1}^{m} n_i. Finally, rectified linear activation (ReLU) is applied elementwise to this representation. at increased training time. We chose s = 5 in our experiments as it gives a reasonable balance between the two.
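Continuing the sketch above under the same assumptions, a toy filter bank followed by ReLU and non-overlapping max pooling with stride s (the paper uses hundreds of filters per width; we use two):

```python
import numpy as np

def conv_bank_and_pool(X, bank, s=5):
    """Stack all filter outputs (N rows), apply ReLU, then max-pool with stride s.

    bank[i] has shape (n_i, d_c, w_i): n_i filters of width w_i.
    Returns segment embeddings of shape (N, ceil(T_x / s)).
    """
    d_c, T_x = X.shape
    rows = []
    for filt_group in bank:
        for f in filt_group:                       # one filter f in R^{d_c x w}
            w = f.shape[1]
            X_pad = np.pad(X, ((0, 0), (w - 1, 0)))
            rows.append([np.sum(X_pad[:, k:k + w] * f) for k in range(T_x)])
    Y = np.maximum(np.stack(rows), 0.0)            # ReLU; Y has shape (N, T_x)
    pad = (-T_x) % s                               # pad so the length divides into segments
    Y = np.pad(Y, ((0, 0), (0, pad)), constant_values=-np.inf)
    return Y.reshape(Y.shape[0], -1, s).max(axis=2)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 14))                       # embedded sentence, d_c=8, T_x=14
bank = [rng.normal(size=(2, 8, w)) for w in (1, 2, 3)]  # toy bank: 2 filters per width
print(conv_bank_and_pool(X, bank).shape)           # (6, 3): N=6, ceil(14/5)=3 segments
```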
1610.03017#16
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
17
Analysis of learning Impatient DNNs In the following, we show that for learning Impatient DNNs care has to be taken to ensure convergence. For example, an adequate learning rate has to be determined to ensure convergence of the network while avoiding saturation at low accuracy. This becomes much more important when dealing with losses of multiple branches, since the gradients at shared layers accumulate, leading to the network training being more fragile. Especially in the case of deeper network architectures, e.g. VGG, we observed that convergence cannot be achieved at all without proper normalization. Figure 4: Convergence during learning an Impatient AlexNet trained on MIT-67 with (right) and without (left) batch normalization: different colors indicate individual early prediction layers, and it can be clearly seen that batch normalization significantly improves stability during training.
1610.02850#17
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
17
at increased training time. We chose s = 5 in our experiments as it gives a reasonable balance between the two. Max pooling with stride The output from the convolutional layer is first split into segments of width s, and max-pooling over time is applied to each segment with no overlap. This procedure selects the most salient features to give a segment embedding. Each segment embedding is a summary of meaningful character n-grams occurring in a particular (overlapping) subsequence in the source sentence. Note that the rightmost segment (above ‘on’) in Figure 1 may capture ‘son’ (the filter in green) although ‘s’ occurs in the previous segment. In other words, our segments are overlapping, as opposed to those in word- or subword-level models with hard segmentation. Highway network A sequence of segment embeddings from the max pooling layer is fed into a highway network (Srivastava et al., 2015). Highway networks have been shown to significantly improve the quality of a character-level language model when used with convolutional layers (Kim et al., 2015). A highway network transforms input x with a gating mechanism that adaptively regulates information flow:
1610.03017#17
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
18
being more fragile. Especially in the case of deeper network architectures, e.g. VGG, we observed that convergence cannot be achieved at all without proper normalization. Therefore, we made use of batch normalization [9], which rectifies the covariate shift in the input data distribution of each convolution layer. This technique allows for training with much higher learning rates, ensuring faster convergence and, in our case, convergence at all. In Figure 4 (left), an example of optimizing an Impatient AlexNet is shown where the validation accuracy for early prediction layers saturates very slowly at a low value, caused by a strongly decreased learning rate of 10^−4. For very early layers, no convergence is achieved even after running 2000 epochs of training. In contrast, adding batch normalization (right-hand side) allows for a 100× higher learning rate, resulting in very fast convergence at a high level of validation accuracy for all prediction layers.
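For concreteness, a minimal forward-pass sketch of batch normalization in training mode, in the spirit of [9]; shapes and input statistics are illustrative.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the minibatch, then scale and shift (training mode)."""
    mu = x.mean(axis=0)                       # per-feature minibatch mean
    var = x.var(axis=0)                       # per-feature minibatch variance
    x_hat = (x - mu) / np.sqrt(var + eps)     # zero mean, unit variance per feature
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 10))        # shifted, scaled activations
y = batch_norm_forward(x, np.ones(10), np.zeros(10))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))   # ~0 and ~1
```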
1610.02850#18
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
18
Segments act as our internal linguistic unit from this layer and above: the attention mechanism, for instance, attends to each source segment instead of each source character. This shortens the source representation s-fold: Y' ∈ R^{N×(T_x/s)}. Empirically, we found using smaller s leads to better performance. The highway transformation is y = g ⊙ ReLU(W_1 x + b_1) + (1 − g) ⊙ x, where g = σ(W_2 x + b_2); we apply this to each segment embedding individually. Recurrent layer The output from the highway layer is fed into a bidirectional GRU from §2, using each segment embedding as input. Subword-level encoder Unlike a subword-level encoder, our model does not commit to a specific choice of segmentation; it is instead trained to consider every possible character pattern and extract only the most meaningful ones. Therefore, the definition of segmentation in our model is dynamic, unlike in subword-level encoders. During training, the model finds the most salient character patterns in a sentence via max-pooling, and the character
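A numpy sketch of this highway transformation applied to one segment embedding; the dimensionality and the negative gate bias (which initially favors carrying x through unchanged) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_forward(x, W1, b1, W2, b2):
    """y = g * ReLU(W1 x + b1) + (1 - g) * x, with gate g = sigmoid(W2 x + b2)."""
    g = sigmoid(W2 @ x + b2)
    return g * np.maximum(W1 @ x + b1, 0.0) + (1.0 - g) * x

rng = np.random.default_rng(0)
d = 6                                    # highway layers preserve dimensionality
x = rng.normal(size=d)                   # one segment embedding
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b1, b2 = np.zeros(d), np.full(d, -2.0)   # negative gate bias initially favors carrying x
print(highway_forward(x, W1, b1, W2, b2))
```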
1610.03017#18
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
19
Evaluation of early prediction architectures As presented in Sect. 3, several architectures are possible for early prediction. The straightforward approach of connecting FC layers directly to each convolutional layer leads to a huge number of additional parameters to be optimized. These layers are prone to overfitting. This can be seen in the learning statistics for MIT-67 with a VGG19 base architecture shown in Figure 5: the training loss is near zero together with only a moderate validation accuracy for early layers. We also experimented with multiple FC layers; however, learning of these architectures failed to converge in all cases, independently of the choice of hyperparameters. By applying spatial pooling layers, validation accuracy is substantially improved, which can be seen in Figure 5 (AVG and AVG4x4). Especially AVG4x4 provides rough spatial information, which helps to improve performance even further. Therefore, we use this architecture in the following experiments. In the last two columns of Table 1, average computation times according to the particular weighting schemes and budget distributions are presented for a single image. If inference is performed up to a particular prediction layer known in advance,
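A sketch of how we read the AVG and AVG4x4 heads: spatially average-pool the convolutional feature map (globally, or onto a 4×4 grid), then apply a single linear classification layer. The feature-map shape and weight initialization are illustrative assumptions.

```python
import numpy as np

def avg_pool_head(fmap, W, b, grid=1):
    """Early prediction from a conv feature map of shape (C, H, W_sp).

    grid=1 corresponds to AVG (global average pooling); grid=4 to AVG4x4,
    which keeps rough spatial information with far fewer parameters than FC.
    """
    C, H, W_sp = fmap.shape
    hs, ws = H // grid, W_sp // grid
    cropped = fmap[:, :hs * grid, :ws * grid]
    pooled = cropped.reshape(C, grid, hs, grid, ws).mean(axis=(2, 4))  # (C, grid, grid)
    return W @ pooled.reshape(-1) + b          # linear classifier on pooled features

rng = np.random.default_rng(0)
fmap = rng.normal(size=(256, 13, 13))          # e.g. a late conv feature map
n_classes = 67                                 # MIT-67
W = rng.normal(size=(n_classes, 256 * 4 * 4)) * 0.01
print(avg_pool_head(fmap, W, np.zeros(n_classes), grid=4).shape)  # (67,)
```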
1610.02850#19
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
19
Table 1: Bilingual model architectures. The char2char model uses 200 filters of width 1, 200 filters of width 2, · · · and 300 filters of width 8.

|               | bpe2char          | char2char                       |
|---------------|-------------------|---------------------------------|
| Vocab size    | 24,440            | 300                             |
| Source emb.   | 512               | 128                             |
| Target emb.   | 512               | 512                             |
| Conv. filters | –                 | 200-200-250-250-300-300-300-300 |
| Pool stride   | –                 | 5                               |
| Highway       | –                 | 4 layers                        |
| Encoder       | 1-layer 512 GRUs  | 1-layer 512 GRUs                |
| Decoder       | 2-layer 1024 GRUs | 2-layer 1024 GRUs               |

sequences extracted by the model change over the course of training. This is in contrast to how BPE segmentation rules are learned: the segmentation is learned and fixed before training begins. # 4.2 Attention and Decoder Similarly to the attention model in (Chung et al., 2016; Firat et al., 2016a), a single-layer feedforward network computes the attention score of the next target character to be generated with every source segment representation. A standard two-layer character-level decoder then takes the source context vector from the attention mechanism and predicts each target character. This decoder was described as the base decoder by Chung et al. (2016). # 5 Experiment Settings # 5.1 Task and Models
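For the attention computation in §4.2 above, a minimal sketch of a single-layer feedforward (Bahdanau-style) score between a decoder state and each source segment; the dimensions, weight names, and the tanh scoring form are our assumptions, since the chunk does not spell them out.

```python
import numpy as np

def attention_weights(s_dec, segments, W_d, W_s, v):
    """score_j = v . tanh(W_d s_dec + W_s y_j); softmax over source segments y_j."""
    scores = np.array([v @ np.tanh(W_d @ s_dec + W_s @ y_j) for y_j in segments.T])
    e = np.exp(scores - scores.max())              # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
d_dec, d_seg, d_att, n_seg = 10, 6, 8, 3
segments = rng.normal(size=(d_seg, n_seg))         # encoder output, one column per segment
alpha = attention_weights(rng.normal(size=d_dec), segments,
                          rng.normal(size=(d_att, d_dec)),
                          rng.normal(size=(d_att, d_seg)),
                          rng.normal(size=d_att))
print(alpha, alpha.sum())                          # weights over 3 segments, sum to 1
```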
1610.03017#19
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
20
the particular weighting schemes and budget distributions are presented for a single image. If inference is performed up to a particular prediction layer known in advance, previous prediction layers do not have to be assessed and we achieve low prediction times tB without additional overhead. Interruptable prediction in the anytime scenario (tA) requires inference of all intermediate prediction layers, caused by the potential sudden interruption. In the worst case, i.e. when the forward pass includes all prediction layers, average computation time increases compared to
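A sketch of the anytime usage pattern: run the shared stages in order, refresh the early prediction after each one, and return the most recent estimate once the budget is exhausted. The placeholder stages, heads, and timing are illustrative.

```python
import time

def anytime_predict(x, stages, heads, budget_s):
    """Run shared stages in order; refresh the early prediction after each stage.

    An interruption (here, an exhausted time budget) at any point still yields
    the most recent output estimate.
    """
    start = time.perf_counter()
    h, latest = x, None
    for stage, head in zip(stages, heads):
        h = stage(h)                     # next block of shared layers
        latest = head(h)                 # early prediction layer
        if time.perf_counter() - start >= budget_s:
            break                        # budget exhausted: return best-so-far
    return latest

# Placeholder stages and heads standing in for conv blocks and EP layers:
stages = [lambda h: h + 1] * 6
heads = [lambda h, k=k: f"prediction from EP layer {k + 1}" for k in range(6)]
print(anytime_predict(0, stages, heads, budget_s=1e-3))
```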
1610.02850#20
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
20
# 5 Experiment Settings # 5.1 Task and Models We evaluate the proposed character-to-character (char2char) translation model against subword-level baselines (bpe2bpe and bpe2char) on the WMT'15 DE→EN, CS→EN, FI→EN and RU→EN translation tasks.¹ We do not consider word-level models, as it has already been shown that subword-level models outperform them by mitigating issues inherent to closed-vocabulary translation (Sennrich et al., 2015; Sennrich et al., 2016). Indeed, subword-level NMT models have been the de-facto state-of-the-art and are now used in a very large-scale industry NMT system to serve millions of users per day (Wu et al., 2016). ¹http://www.statmt.org/wmt15/translation-task.html We experiment in two different scenarios: 1) a bilingual setting where we train a model on data from a single language pair; and 2) a multilingual setting where the task is many-to-one translation: we train a single model on data from all four language pairs. Hence, our baselines and models are:
1610.03017#20
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
21
Figure 5: Comparison of different early prediction architectures of an Impatient VGG19 trained on MIT-67. Replacing fully-connected layers (FC) by spatial average pooling (AVG & AVG4x4) reduces the effect of overfitting, resulting in higher validation accuracy. the scenario with a-priori given budgets. All experiments were performed on an NVIDIA GeForce GTX 970 GPU.
1610.02850#21
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
21
(a) bilingual bpe2bpe: from (Firat et al., 2016a). (b) bilingual bpe2char: from (Chung et al., 2016). (c) bilingual char2char (d) multilingual bpe2char (e) multilingual char2char We train all the models ourselves other than (a), for which we report the results from (Firat et al., 2016a). We detail the configuration of our models in Table 1 and Table 2. # 5.2 Datasets and Preprocessing We use all available parallel data on the four language pairs from WMT'15: DE-EN, CS-EN, FI-EN and RU-EN. For the bpe2char baselines, we only use sentence pairs where the source is no longer than 50 subword symbols. For our char2char models, we only use pairs where the source sentence is no longer than 450 characters. For all the language pairs apart from FI-EN, we use newstest-2013 as a development set and newstest-2014 and newstest-2015 as test sets. For FI-EN, we use newsdev-2015 and newstest-2015 as development and test sets, respectively. We tokenize² each corpus using the script from Moses.³
1610.03017#21
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
22
the scenario with a-priori given budgets. All experiments were performed on an NVIDIA GeForce GTX 970 GPU. Does joint training of EP layers help? The most interesting question, however, is whether our joint training scheme motivated in Sect. 2 provides superior results compared to learning predictors independently. To answer this question, we compared our approach with different baselines that learn several SVM classifiers based on extracted CNN features [3] at each early prediction layer. We optimize the SVM hyperparameters on the validation set to allow a fair comparison. The underlying networks, in contrast, differ: we made use of an original CNN pre-trained on ImageNet and of a pre-trained CNN fine-tuned on the current dataset. In Table 1, the evaluation for different time-budget distributions is presented, where each result shows the expected accuracy according to the particular weighting scheme and budget distribution. It can be clearly seen that the original CNN (ORIG) without adaptation to the current dataset performs worst. By applying fine-tuning (FT), however, accuracy can be noticeably increased for all early prediction SVMs.
1610.02850#22
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
22
When training bilingual bpe2char models, we extract 20,000 BPE operations from each of the source and target corpus using a script from (Sennrich et al., 2015). This gives a source BPE vocabulary of size 20k−24k for each language. ²This is unnecessary for char2char models, yet was carried out for comparison. ³https://github.com/moses-smt/mosesdecoder # 5.3 Training Details Each model is trained using stochastic gradient descent and Adam (Kingma and Ba, 2014) with learning rate 0.0001 and minibatch size 64. Training continues until the BLEU score on the validation set stops improving. The norm of the gradient is clipped with a threshold of 1 (Pascanu et al., 2013). All weights are initialized from a uniform distribution [−0.01, 0.01]. Each model is trained on a single pre-2016 GTX Titan X GPU with 12GB RAM.

Table 2: Multilingual model architectures.

|               | bpe2char          | char2char                       |
|---------------|-------------------|---------------------------------|
| Vocab size    | 54,544            | 400                             |
| Source emb.   | 512               | 128                             |
| Target emb.   | 512               | 512                             |
| Conv. filters | –                 | 200-250-300-300-400-400-400-400 |
| Pool stride   | –                 | 5                               |
| Highway       | –                 | 4 layers                        |
| Encoder       | 1-layer 512 GRUs  | 1-layer 512 GRUs                |
| Decoder       | 2-layer 1024 GRUs | 2-layer 1024 GRUs               |

# 5.4 Decoding Details
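For the gradient clipping described in §5.3 above, a sketch of rescaling by the global L2 norm with threshold 1, in the spirit of (Pascanu et al., 2013); the gradient values are illustrative.

```python
import numpy as np

def clip_by_global_norm(grads, threshold=1.0):
    """Jointly rescale all gradients if their global L2 norm exceeds threshold."""
    norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    scale = min(1.0, threshold / (norm + 1e-12))
    return [g * scale for g in grads], norm

grads = [np.full((2, 2), 3.0), np.full(3, -4.0)]   # illustrative gradients
clipped, norm = clip_by_global_norm(grads)
print(round(norm, 3), np.sqrt(sum(np.sum(g * g) for g in clipped)))  # 9.165 1.0
```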
1610.03017#22
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
23
Our joint learning of the EP layers provides superior results in almost all scenarios. Especially in the case of small time budgets, our method benefits from taking the budget distribution into account during learning, resulting in an improvement of almost 10% on MIT-67 and 6% on 15-Scenes for an Impatient VGG19 compared to the best performing baseline. For extreme weighting schemes with high priority on later predictions (POLY), fine-tuning of the original networks provides slightly better results compared to our approach. This is not surprising, since in this case training is very similar to that of standard DNNs with only one final loss layer. In Table 2, we compare our approach to state-of-the-art results for MIT-67 and 15-Scenes. Although the focus of this paper is rather on anytime capability, while running the risk of dropping accuracy at final layers, we achieved superior results. It should be noted that only the last layer is used to obtain predictions, since we assume to have no budget restrictions. Especially for the jointly trained Impatient VGG19 on MIT-67, it was even possible to outperform the standard fine-tuned CNN, which supports the idea of “deep supervision” [24].
1610.02850#23
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
23
Each model is trained on a single pre-2016 GTX Titan X GPU with 12GB RAM. # 5.4 Decoding Details Following (Chung et al., 2016), a two-layer unidirectional character-level decoder with 1024 GRU units is used for all our experiments. For decoding, we use beam search with length-normalization to penalize shorter hypotheses. The beam width is 20 for all models. # 5.5 Training Multilingual Models Task description We train a model on a many-to-one translation task to translate a sentence in any of the four languages (German, Czech, Finnish and Russian) to English. We do not provide a language identifier to the encoder, but merely the sentence itself, encouraging the model to perform language identification on the fly. In addition, by not providing the language identifier, we expect the model to handle intra-sentence code-switching seamlessly.
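A sketch of how length-normalization reranks finished hypotheses; dividing the total log-probability by the hypothesis length is one common variant, assumed here since the chunk does not give the exact formula.

```python
def best_hypothesis(hyps):
    """hyps: list of (tokens, total_logprob). Rank by per-token log-probability.

    Without normalization, beam search favors short hypotheses, since every
    extra token makes the total log-probability more negative.
    """
    return max(hyps, key=lambda h: h[1] / len(h[0]))

hyps = [(list("the cat"), -4.2),                   # short hypothesis, higher raw score
        (list("the cat sat on the mat"), -9.9)]
print(max(hyps, key=lambda h: h[1])[0])            # raw score picks the short one
print(best_hypothesis(hyps)[0])                    # normalized score picks the long one
```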
1610.03017#23
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
24
Model architecture The multilingual char2char model uses slightly more convolutional filters than the bilingual char2char model, namely (200-250-300-300-400-400-400-400). Otherwise, the architecture remains the same as shown in Table 1. By not changing the size of the encoder and the decoder, we fix the capacity of the core translation module, and only allow the multilingual model to detect more character patterns. Similarly, the multilingual bpe2char model has the same encoder and decoder as the bilingual bpe2char model, but a larger vocabulary. We learn 50,000 multilingual BPE operations on the multilingual corpus, resulting in 54,544 subwords. See Table 2 for the exact configuration of our multilingual models.
1610.03017#24
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.02850
25
VGG19 (expected accuracy in % per budget scheme, with average computation times per image):

| BUDGET SCHEME | MIT-67 ORIG | MIT-67 FT | MIT-67 OURS | 15-Scenes ORIG | 15-Scenes FT | 15-Scenes OURS | ∅tB [ms] | ∅tA [ms] |
|---------------|-------------|-----------|-------------|----------------|--------------|----------------|----------|----------|
| EQ            | 46.65       | 48.07     | 53.93       | 83.37          | 84.28        | 85.63          | 1.11     | 1.19     |
| LIN           | 54.19       | 56.52     | 60.55       | 85.87          | 87.47        | 88.02          | 1.37     | 1.47     |
| POLY          | 62.82       | 67.07     | 69.66       | 88.71          | 91.71        | 90.88          | 1.72     | 1.84     |
| ILIN          | 37.25       | 37.71     | 45.62       | 77.56          | 77.73        | 80.87          | 0.82     | 0.86     |
| IPOLY         | 25.63       | 25.65     | 35.11       | 70.14          | 69.85        | 75.93          | 0.50     | 0.51     |
| NORM          | 47.53       | 47.90     | 55.38       | 84.46          | 84.74        | 86.67          | 1.07     | 1.15     |

ALEXNET (this chunk is cut off; the 15-Scenes and timing columns and the last MIT-67 OURS entries are truncated):

| BUDGET SCHEME | MIT-67 ORIG | MIT-67 FT | MIT-67 OURS |
|---------------|-------------|-----------|-------------|
| EQ            | 41.75       | 46.19     | 48.40       |
| LIN           | 45.19       | 50.96     | 52.13       |
| POLY          | 48.50       | 56.29     | 55.76       |
| ILIN          | 36.64       | 39.59     | 42.91       |
| IPOLY         | 28.69       | 30.17     | …           |
| NORM          | 43.97       | 47.80     | …           |
1610.02850#25
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
25
Data scheduling For the multilingual models, an appropriate scheduling of data from different languages is crucial to avoid overfitting to one language too soon. Following (Firat et al., 2016a; Firat et al., 2016b), each minibatch is balanced, in that the proportion of each language pair in a single minibatch corresponds to that of the full corpus. With this minibatch scheme, roughly the same number of updates is required to make one full pass over the entire training corpus of each language pair. Minibatches from all language pairs are combined and presented to the model as a single minibatch (see the sketch below). See Table 3 for the minibatch size for each language pair.

Table 3: The minibatch size of each language (second row) is proportionate to the number of sentence pairs in each corpus (first row).

|                | DE-EN | CS-EN | FI-EN | RU-EN |
|----------------|-------|-------|-------|-------|
| corpus size    | 4.5m  | 12.1m | 1.9m  | 2.3m  |
| minibatch size | 14    | 37    | 6     | 7     |

Treatment of Cyrillic To facilitate cross-lingual parameter sharing, we convert every Cyrillic character in the Russian source corpus to the Latin alphabet according to ISO-9. Table 4 shows an example of how this conversion may help the multilingual models identify lexemes that are shared across multiple languages.
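The minibatch sizes in Table 3 follow from proportional allocation; a sketch reproducing them (the total minibatch size 64 comes from §5.3; simple rounding is our assumption):

```python
corpus = {"DE-EN": 4.5e6, "CS-EN": 12.1e6, "FI-EN": 1.9e6, "RU-EN": 2.3e6}
total_batch = 64                                    # minibatch size from Section 5.3

total = sum(corpus.values())
batch = {lang: round(total_batch * n / total) for lang, n in corpus.items()}
print(batch)   # {'DE-EN': 14, 'CS-EN': 37, 'FI-EN': 6, 'RU-EN': 7}, as in Table 3
```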
1610.03017#25
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
26
Table 4: Czech and Russian words for school and schools, alongside the conversion of Russian characters into Latin.

|         | CS    | RU    | RU (ISO-9) |
|---------|-------|-------|------------|
| school  | škola | школа | škola      |
| schools | školy | школы | školy      |

Multilingual BPE For the multilingual bpe2char model, multilingual BPE segmentation rules are extracted from a large dataset containing training source corpora of all the language pairs. To ensure the BPE rules are not biased towards one language,
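A sketch of the character-level conversion with a small excerpt of the ISO-9 mapping, just enough for the example in Table 4 (the full standard covers the entire Cyrillic alphabet):

```python
# Small excerpt of the ISO-9 Cyrillic-to-Latin mapping: only the letters
# needed for Table 4; the full standard covers the whole Cyrillic alphabet.
ISO9 = {"ш": "š", "к": "k", "о": "o", "л": "l", "а": "a", "ы": "y"}

def cyr_to_latin(text: str) -> str:
    return "".join(ISO9.get(ch, ch) for ch in text)

print(cyr_to_latin("школа"), cyr_to_latin("школы"))  # škola školy
```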
1610.03017#26
1610.02850
27
Table 1: Comparison of Impatient AlexNet (top) and VGG19 (bottom) CNNs with several baselines. Performance is measured by expected accuracy in % based on the particular budget distribution.

            Orig     FT       Ours (eq)  Ours (poly)  PlacesCNN [28]  [18]∗
MIT-67      65.0%    71.04%   67.23%     71.71%       68.24%          71.5%
15-Scenes   88.30%   92.83%   92.13%     91.45%       90.19%          –

Table 2: How good are our VGG19 Impatient Networks when there are no budget restrictions during testing? The table shows the accuracy of the last prediction layer, also compared to state-of-the-art results. ∗ The method of [18] requires more than 4s per image.

In particular, interrupting the network at a certain depth might already provide the correct decision, which renders further computation unnecessary. To implement the idea of efficient inference, an adequate stopping criterion has to be defined. Since each early prediction layer provides probabilistic outputs, we applied uncertainty-based decision making by calculating the ratio between the two highest class probabilities, which is known as the 1-vs-2 strategy [11]. If the current prediction of class probabilities is characterized by a high ratio, inference can be interrupted.
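A minimal sketch of the 1-vs-2 stopping rule (the threshold value and the head interface are illustrative assumptions; the paper selects the threshold to trade accuracy against time):

```python
import numpy as np

def should_stop(probs: np.ndarray, ratio_threshold: float = 3.0) -> bool:
    """Stop when the top class probability sufficiently dominates the runner-up."""
    second, first = np.sort(probs)[-2:]
    return first / max(second, 1e-12) >= ratio_threshold

def anytime_predict(early_heads, x, ratio_threshold=3.0):
    """Query early prediction heads from shallow to deep and return the
    first confident decision; otherwise fall back to the deepest head.
    `early_heads` is an assumed list of callables mapping input -> class probs."""
    probs = None
    for head in early_heads:
        probs = head(x)
        if should_stop(probs, ratio_threshold):
            break
    return int(np.argmax(probs))
```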
1610.02850#27
Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets
We propose Impatient Deep Neural Networks (DNNs) which deal with dynamic time budgets during application. They allow for individual budgets given a priori for each test example and for anytime prediction, i.e., a possible interruption at multiple stages during inference while still providing output estimates. Our approach can therefore tackle the computational costs and energy demands of DNNs in an adaptive manner, a property essential for real-time applications. Our Impatient DNNs are based on a new general framework of learning dynamic budget predictors using risk minimization, which can be applied to current DNN architectures by adding early prediction and additional loss layers. A key aspect of our method is that all of the intermediate predictors are learned jointly. In experiments, we evaluate our approach for different budget distributions, architectures, and datasets. Our results show a significant gain in expected accuracy compared to common baselines.
http://arxiv.org/pdf/1610.02850
Manuel Amthor, Erik Rodner, Joachim Denzler
cs.CV
British Machine Vision Conference (BMVC) 2016
null
cs.CV
20161010
20161010
[ { "id": "1502.03167" }, { "id": "1604.01685" }, { "id": "1601.07576" }, { "id": "1506.02515" }, { "id": "1505.02496" }, { "id": "1504.00702" } ]
1610.03017
27
Body of Table 5 (the caption follows in a later chunk; (∗) rows are taken from (Firat et al., 2016a) and report no Test1 score, and the remaining FI-EN and RU-EN Test2 entries are cut off at the chunk boundary):

        Setting  Src   Trg   Dev    Test1  Test2
DE-EN
(a)∗    bi       bpe   bpe   24.13  –      24.00
(b)     bi       bpe   char  25.64  24.59  25.27
(c)     bi       char  char  26.30  25.77  25.83
(d)     multi    bpe   char  24.92  24.54  25.23
(e)     multi    char  char  25.67  25.13  25.79
CS-EN
(f)∗    bi       bpe   bpe   21.24  –      20.32
(g)     bi       bpe   char  22.95  23.78  22.40
(h)     bi       char  char  23.38  24.08  22.46
(i)     multi    bpe   char  23.27  24.27  22.42
(j)     multi    char  char  24.09  25.01  23.24
FI-EN
(k)∗    bi       bpe   bpe   13.15  –      12.24
(l)     bi       bpe   char  14.54  –      13.98
(m)     bi       char  char  14.18  –
(n)     multi    bpe   char  14.70  –
(o)     multi    char  char  15.96  –
RU-EN
(p)∗    bi       bpe   bpe   21.04  –
(q)     bi       bpe   char  21.68  26.21
(r)     bi       char  char  21.75  26.80
(s)     multi    bpe   char  21.75  26.31
(t)     multi    char  char  22.20  26.33
1610.03017#27
1610.02850
28
The analysis of the proposed criterion can be seen in Figure 6, which shows time-accuracy plots. Each point on the red graph is obtained with a fixed ratio threshold, which determines whether an early layer prediction already reaches sufficient certainty and thus provides the final decision. The blue graph, however, represents classification results of each early prediction layer itself, i.e., the final decision is always made at the same depth, independently of the underlying ratio. As can be seen, by using uncertainty-based predictions, accuracy can be increased substantially in many cases with the same computational effort. For example, interrupting the AlexNet network at the fifth prediction layer consistently takes ∼1 ms per image for MIT-67 (second-last plot in Figure 6). In contrast, using the proposed criterion, accuracy can be increased from 53% up to 57% while still requiring exactly the same computation time on average. An entropy-based criterion achieved inferior performance in our experiments.

Qualitative results In Figure 7, qualitative results for the task of scene recognition (class “bathroom” from MIT-67) are shown. Different numbers in each image indicate the early
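The red curves can be traced by sweeping the ratio threshold; a sketch under assumed inputs (per-head probability arrays, cumulative per-head times, and gold labels, none of which come from the paper's code):

```python
import numpy as np

def time_accuracy_curve(head_probs, head_times, labels, thresholds):
    """For each threshold, every example exits at the first head whose
    top-1/top-2 probability ratio clears it; returns (avg time, accuracy) pairs."""
    curve = []
    n = len(labels)
    for t in thresholds:
        total_time, hits = 0.0, 0
        for i in range(n):
            exit_k = len(head_probs) - 1          # default: deepest head
            for k, probs in enumerate(head_probs):
                second, first = np.sort(probs[i])[-2:]
                if first / max(second, 1e-12) >= t:
                    exit_k = k
                    break
            total_time += head_times[exit_k]
            hits += int(np.argmax(head_probs[exit_k][i]) == labels[i])
        curve.append((total_time / n, hits / n))
    return curve
```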
1610.02850#28
1610.02850
29
Figure 6: Evaluation of uncertainty-based predictions compared to early layer predictions. From left to right: Impatient AlexNet on 15-Scenes, Impatient VGG19 on 15-Scenes, Impatient AlexNet on MIT-67, and Impatient VGG19 on MIT-67. (Each panel plots accuracy on the test set against average time per image in ms, comparing “Ours (uncertainty)” with “Ours (anytime EQ)”.)
1610.02850#29
1610.03017
29
Table 5: BLEU scores of five different models on four language pairs. For each test or development set, the best performing model is shown in bold. (∗) results are taken from (Firat et al., 2016a).

larger datasets such as the Czech and German corpora are trimmed such that every corpus contains an approximately equal number of characters.

# 6 Quantitative Analysis

# 6.1 Evaluation with BLEU Score

In this section, we first establish our main hypotheses for introducing character-level and multilingual models, and investigate whether our observations support or disagree with our hypotheses. From our empirical results, we want to verify: (1) if fully character-level translation outperforms subword-level translation, (2) in which setting and to what extent multilingual translation is beneficial, and (3) if multilingual, character-level translation achieves superior performance to other models. We outline our results with respect to each hypothesis below.
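A simplified sketch of the equal-character trimming (the paper does not spell out the exact procedure, so the greedy prefix cut below is an assumption):

```python
def trim_to_char_budget(corpora):
    """Trim each source corpus (a list of sentences) so all corpora hold
    roughly the same number of characters before extracting joint BPE rules;
    the smallest corpus sets the budget."""
    budget = min(sum(len(s) for s in corpus) for corpus in corpora.values())
    trimmed = {}
    for lang, corpus in corpora.items():
        kept, total = [], 0
        for sentence in corpus:
            if total >= budget:
                break
            kept.append(sentence)
            total += len(sentence)
        trimmed[lang] = kept
    return trimmed
```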
1610.03017#29
1610.02850
30
Figure 7: Images of MIT-67 first correctly classified as “bathroom” at different early prediction layers of an Impatient VGG19 CNN. The position of the layers is highlighted as a number and a uniquely colored border.

prediction layer in which the particular example was first correctly classified. It can be clearly seen that the examples already decided at EP1 are white-colored bathrooms with a clearly visible toilet bowl, shower, and sink. With increasing complexity of the scene, the depth of the deciding layer increases as well to provide correct decisions. For example, the rightmost images in the second row of Figure 7 show extraordinary bathrooms with unusually colored walls and furnishings, increasing the likelihood of confusion with other classes, e.g. a children's room.

# 5 Conclusions
1610.02850#30
1610.03017
30
(1) Character- vs. subword-level In a bilingual setting, the char2char model outperforms both subword-level baselines on DE-EN (Table 5 (a-c)) and CS-EN (Table 5 (f-h)). On the other two language pairs, it exceeds the bpe2bpe model and achieves similar performance to the bpe2char baseline (Table 5 (k-m) and (p-r)). We conclude that the proposed character-level model is comparable to or better than both subword-level baselines. Meanwhile, the multilingual character-level encoder surpasses the subword-level encoder consistently in all the language pairs (Table 5 (d-e), (i-j), (n-o) and (s-t)). From this, we conclude that translating at the level of characters allows the model to discover shared constructs between languages more effectively. This also demonstrates that the character-level model is more flexible in assigning model capacity to different language pairs.
1610.03017#30
1610.02850
31
# 5 Conclusions

In this paper, we presented impatient deep neural networks that tackle the problem of classification with dynamic time budgets during application. Compared to standard DNNs, which suffer from a high computational demand during inference, we showed that our approach allows for anytime prediction, i.e., a possible interruption at multiple stages while still providing output estimates, which renders our method suitable even for real-time applications. We presented a novel general framework of learning dynamic budget predictors based on risk minimization, which we adapted directly to state-of-the-art convolutional neural network architectures by branching additional early prediction layers with weighted losses. Based on a set of object classification datasets and architectures, we showed that our approach provides superior results for different time budget distributions. Furthermore, we developed an uncertainty-based prediction framework that reduces computational costs while still providing the same accuracy.

# References

[1] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. arXiv preprint arXiv:1604.01685, 2016.
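The branching idea can be illustrated with a toy network; this is not the paper's architecture, just a sketch of intermediate prediction heads trained jointly under fixed loss weights (channel sizes and weights are made up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImpatientToyNet(nn.Module):
    def __init__(self, n_classes=10, head_weights=(0.3, 0.6, 1.0)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        # One early-prediction head per stage, fed by global average pooling.
        self.heads = nn.ModuleList([nn.Linear(c, n_classes) for c in (16, 32, 64)])
        self.head_weights = head_weights

    def forward(self, x):
        logits = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            logits.append(head(x.mean(dim=(2, 3))))  # pool features, predict early
        return logits

    def joint_loss(self, logits, targets):
        # All intermediate predictors are learned jointly via a weighted sum.
        return sum(w * F.cross_entropy(l, targets)
                   for w, l in zip(self.head_weights, logits))
```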
1610.02850#31
1610.03017
31
(2) Multilingual vs. bilingual At the level of characters, we note that multilingual translation is indeed strongly beneficial. On the test sets, the multilingual character-level model outperforms the single-pair character-level model by 2.64 BLEU in FI-EN (Table 5 (m, o)) and 0.78 BLEU in CS-EN (Table 5 (h, j)), while achieving comparable results on DE-EN and RU-EN. At the level of subwords, on the other hand, we do not observe the same degree of performance benefit from multilingual translation. Also, the multilingual bpe2char model requires many more updates to reach the performance of the bilingual
1610.03017#31
1610.02850
32
[2] Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. CoRR, abs/1404.0736, 2014.
[3] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013.
[4] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep spatial autoencoders for visuomotor learning. In ICRA, 2016.
[5] Björn Fröhlich, Erik Rodner, and Joachim Denzler. As time goes by: Anytime semantic segmentation with iterative context forests. In Symposium of the German Association for Pattern Recognition (DAGM), pages 1–10, 2012.
[6] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
1610.02850#32
1610.03017
32
                             Adequacy              Fluency
    Setting  Src   Trg      Raw (%)  Stnd. (σ)    Raw (%)  Stnd. (σ)
DE-EN
(a) bi       bpe   char     65.47    -0.0536      68.64     0.0052
(b) bi       char  char     68.11     0.0509      68.80     0.0468
(c) multi    char  char     67.80     0.0281      68.92     0.0282
CS-EN
(d) bi       bpe   char     62.76     0.0361      61.62    -0.0285
(e) bi       char  char     60.78    -0.0154      63.37     0.0410
(f) multi    char  char     63.03     0.0415      65.08     0.1047
FI-EN
(g) bi       bpe   char     47.03    -0.1326      59.33    -0.0329
(h) bi       char  char     50.17    -0.0650      59.97    -0.0216
(i) multi    char  char     50.95    -0.0110      63.26     0.0969
RU-EN
(j) bi       bpe   char     61.26    -0.1062      57.74    -0.0592
(k) bi       char  char     64.06     0.0105      59.85     0.0168
(l) multi    char  char     64.77     0.0116      63.32     0.1748
1610.03017#32
1610.02850
33
[7] Sheng Guo, Weilin Huang, and Yu Qiao. Locally-supervised deep hybrid model for scene recognition. arXiv preprint arXiv:1601.07576, 2016.
[8] Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. CoRR, abs/1412.1710, 2014. URL http://arxiv.org/abs/1412.1710.
[9] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[10] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
[11] Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning for image classification. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 2372–2379. IEEE, 2009.
1610.02850#33
1610.03017
33
Table 6: Human evaluation results for adequacy and fluency. We present both the averaged raw scores (Raw) and the averaged standardized scores (Stnd.). Standardized adequacy is used to rank the systems and standardized fluency is used to break ties. A positive standardized score should be interpreted as the number of standard deviations above this particular worker’s mean score that this system scored on average. For each language pair, we boldface the best performing model with statistical significance. When there is a tie, we boldface both systems.

bpe2char model (see Figure 2). This suggests that learning useful subword segmentation across languages is difficult.

(3) Multilingual char2char vs. others The multilingual char2char model is the best performer in CS-EN, FI-EN and RU-EN (Table 5 (j, o, t)), and is the runner-up in DE-EN (Table 5 (e)). The fact that the multilingual char2char model outperforms the single-pair models goes to show the parameter efficiency of character-level translation: instead of training N separate models for N language pairs, it is possible to get better performance with a single multilingual character-level model.
1610.03017#33
1610.02850
34
[12] Sergey Karayev, Mario Fritz, and Trevor Darrell. Anytime recognition of objects and scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 572–579, 2014.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[14] Andrew Lavin. Fast algorithms for convolutional neural networks. CoRR, abs/1509.09308, 2015.
[15] Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 2169–2178. IEEE, 2006.
[16] Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. arXiv preprint arXiv:1506.02515, 2015.
1610.02850#34
1610.03017
34
# 6.2 Human Evaluation

It is well known that automatic evaluation metrics such as BLEU encourage reference-like translations and do not fully capture true translation quality (Callison-Burch, 2009; Graham et al., 2015). Therefore, we also carry out a recently proposed evaluation from (Graham et al., 2016) where we have human assessors rate both (1) adequacy and (2) fluency of each system translation on a scale from 0 to 100 via Amazon Mechanical Turk. Adequacy is the degree to which assessors agree that the system translation expresses the meaning of the reference translation. Fluency is evaluated using the system translation alone, without any reference translation.

Approximately 1k turkers assessed a single test set (3k sentences in newstest-2014) for each system and language pair. Each turker conducted a minimum of 100 assessments for quality control, and the set of scores generated by each turker was standardized to remove any bias in the individual’s scoring strategy.

We consider three models (bilingual bpe2char, bilingual char2char and multilingual char2char) for the human evaluation. We leave out the multilingual bpe2char model to minimize the number of similar systems and to improve the interpretability of the evaluation overall.
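A sketch of the per-worker standardization (the tuple layout of `assessments` is an assumption for illustration):

```python
from collections import defaultdict

def standardize_by_worker(assessments):
    """Z-score each rating against its worker's own mean and standard
    deviation, then average per system; a positive result reads as standard
    deviations above that worker's mean. `assessments` holds
    (worker_id, system_id, raw_score) tuples."""
    by_worker = defaultdict(list)
    for worker, _, score in assessments:
        by_worker[worker].append(score)
    stats = {}
    for worker, scores in by_worker.items():
        mean = sum(scores) / len(scores)
        std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
        stats[worker] = (mean, std if std > 0 else 1.0)  # guard degenerate workers
    by_system = defaultdict(list)
    for worker, system, score in assessments:
        mean, std = stats[worker]
        by_system[system].append((score - mean) / std)
    return {system: sum(z) / len(z) for system, z in by_system.items()}
```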
1610.03017#34
1610.02850
35
[17] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[18] Lingqiao Liu, Chunhua Shen, and Anton van den Hengel. The treasure beneath convolutional layers: Cross-convolutional-layer pooling for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4749–4757, 2015.
[19] Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through FFTs. arXiv preprint arXiv:1312.5851, 2013.
[20] Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 413–420. IEEE, 2009.
[21] David Silver, J Andrew Bagnell, and Anthony Stentz. Learning autonomous driving styles and maneuvers from expert demonstration. In Experimental Robotics, pages 371–386. Springer, 2013.
[22] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
1610.02850#35
1610.03017
35
For DE-EN, we observe that the multilingual char2char and bilingual char2char models are tied with respect to both adequacy and fluency (Table 6 (b-c)). For CS-EN, the multilingual char2char and bilingual bpe2char models are tied for adequacy. However, the multilingual char2char model yields significantly better fluency (Table 6 (d, f)). For FI-EN and RU-EN, the multilingual char2char model is tied with the bilingual char2char model with respect to adequacy, but significantly outperforms all other models in fluency (Table 6 (g-i, j-l)).

Overall, the improvement in translation quality yielded by the multilingual character-level model mainly comes from fluency. We conjecture that because the English decoder of the multilingual model is tuned on all the training sentence pairs, it becomes

(a) Spelling mistakes

DE ori:    Warum sollten wir nicht Freunde sein ?
DE src:    Warum solltne wir nich Freunde sei ?
EN ref:    Why should we not be friends ?
bpe2char:  Why are we to be friends ?
char2char: Why should not we be friends ?

# (b) Rare words
1610.03017#35
1610.02850
36
[23] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[24] Liwei Wang, Chen-Yu Lee, Zhuowen Tu, and Svetlana Lazebnik. Training deeper convolutional networks with deep supervision. arXiv preprint arXiv:1505.02496, 2015.
[25] Zhixiang Xu, Kilian Weinberger, and Olivier Chapelle. The greedy miser: Learning under test-time budgets. arXiv preprint arXiv:1206.6451, 2012.
[26] Zhixiang Xu, Matt Kusner, Gao Huang, and Kilian Q Weinberger. Anytime representation learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1076–1084, 2013.
1610.02850#36
1610.03017
36
# (b) Rare words

DE src:    Siebentausendzweihundertvierundfünfzig .
EN ref:    Seven thousand two hundred fifty four .
bpe2char:  Fifty-five Decline of the Seventy .
char2char: Seven thousand hundred thousand fifties .

# (c) Morphology

DE src:    Die Zufahrtsstraßen wurden gesperrt , wodurch sich laut CNN lange Rückstaus bildeten .
EN ref:    The access roads were blocked off , which , according to CNN , caused long tailbacks .
bpe2char:  The access roads were locked , which , according to CNN , was long back .
char2char: The access roads were blocked , which looked long backwards , according to CNN .

# (d) Nonce words

DE src:    Der Test ist nun über , aber ich habe keine gute Note . Es ist wie eine Verschlimmbesserung .
EN ref:    The test is now over , but i don’t have any good grade . it is like a worsened improvement .
bpe2char:  The test is now over , but i do not have a good note .
char2char: The test is now , but i have no good note , it is like a worsening improvement .

# (e) Multilingual
1610.03017#36
1610.02850
37
[27] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Learning deep representation for face align- ment with auxiliary attributes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(5):918–930, May 2016. ISSN 0162-8828. doi: 10.1109/TPAMI.2015. 2469286. [28] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. In Advances in Learning deep features for scene recognition using places database. neural information processing systems, pages 487–495, 2014.
1610.02850#37
1610.03017
37
# (e) Multilingual

src (DE/CS/RU): Bei der Metropolitního výboru pro dopravu für das Gebiet der San Francisco Bay erklärten Beamte , der Kongress könne das Problem банкротство доверительного Фонда строительства шоссейных дорог einfach durch Erhöhung der Kraftstoffsteuer lösen .
EN ref:    At the Metropolitan Transportation Commission in the San Francisco Bay Area , officials say Congress could very simply deal with the bankrupt Highway Trust Fund by raising gas taxes .
bpe2char:  During the Metropolitan Committee on Transport for San Francisco Bay , officials declared that Congress could solve the problem of bankruptcy by increasing the fuel tax bankrupt .
char2char: At the Metropolitan Committee on Transport for the territory of San Francisco Bay , officials explained that the Congress could simply solve the problem of the bankruptcy of the Road Construction Fund by increasing the fuel tax .
1610.03017#37
1610.03017
38
Table 7: Sample translations. For each example, we show the source sentence as src, the human translation as ref, and the translations from the subword-level baseline and our character-level model as bpe2char and char2char, respectively. For (a), the original, uncorrupted source sentence is also shown (ori). The source sentence in (e) contains words in German (in green), Czech (in yellow) and Russian (in blue). The translations in (a-d) are from the bilingual models, whereas those in (e) are from the multilingual models.

a better language model than a bilingual model’s decoder. We leave it for future work to confirm if this is indeed the case.

# 7 Qualitative Analysis

In Table 7, we demonstrate our character-level model’s robustness in four translation scenarios that conventional NMT systems are known to suffer in. We also showcase our model’s ability to seamlessly handle intra-sentence code-switching, or mixed utterances from two or more languages. We compare sample translations from the character-level model with those from the subword-level model, which already sidesteps some of the issues associated with word-level translation.
1610.03017#38
1610.03017
39
With real-world text containing typos and spelling mistakes, the quality of word-based translation would severely drop, as every non-canonical form of a word cannot be represented. On the other hand, a character-level model has a much better chance of recovering the original word or sentence. Indeed, our char2char model is robust against a few spelling mistakes (Table 7 (a)).

Given a long, rare word such as “Siebentausendzweihundertvierundfünfzig” (seven thousand two hundred fifty four) in Table 7 (b), the subword-level model segments “Siebentausend” as (Sieb, ent, aus, end), which results in an inaccurate translation. The character-level model performs better on these long, concatenative words with ambiguous segmentation.

Also, we expect a character-level model to handle novel and unseen morphological inflections well. We observe that this is indeed the case, as our char2char model correctly understands “gesperrt”, a past participle form of “sperren” (to block) (Table 7 (c)).
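Robustness of this kind can be probed with synthetic noise; a sketch of a simple corruption routine (this noising procedure is an illustration, not the protocol used to build the paper's test sentences):

```python
import random

def corrupt(sentence: str, n_typos: int = 2, seed: int = 0) -> str:
    """Inject character-level typos: swap two adjacent characters or drop one."""
    rng = random.Random(seed)
    chars = list(sentence)
    for _ in range(n_typos):
        i = rng.randrange(len(chars) - 1)
        if rng.random() < 0.5:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]  # transpose
        else:
            del chars[i]                                     # delete
    return "".join(chars)

print(corrupt("Warum sollten wir nicht Freunde sein ?"))
```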
1610.03017#39
1610.03017
40
Nonce words are terms coined for a single use. They are not actual words but are constructed in a way that humans can intuitively guess what they mean, such as workoliday and friyay. We construct a few DE-EN sentence pairs that contain German nonce words (one example shown in Table 7 (d)), and observe that the character-level model can indeed detect salient character patterns and arrive at a correct translation.

Finally, we evaluate our multilingual models’ capacity to perform intra-sentence code-switching by giving them mixed sentences from multiple languages as input. The newstest-2013 development datasets for DE-EN, CS-EN and FI-EN contain intersecting examples with the same English sentences. We compile a list of these sentences in DE/CS/FI and their translation in EN, and choose a few samples uniformly at random from the English side. Words or clauses from different languages are manually intermixed to create multilingual sentences.
1610.03017#40
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
41
We discover that when given sentences with a high degree of language intermixing, as in Table 7 (e), the multilingual bpe2char model fails to seamlessly handle the alternation of languages. Overall, however, both multilingual models generate reasonable translations. This is possible because we did not provide a language identifier when training our multilingual models; as a result, they learned to understand a multilingual sentence and translate it into a coherent English sentence. We show supplementary sample translations for each scenario on a webpage (https://sites.google.com/site/dl4mtc2c).

Training and decoding speed. On a single Titan X GPU, we observe that our char2char models are approximately 35% slower to train than our bpe2char baselines when the same batch size is used. Our bilingual character-level models can be trained in roughly two weeks. We further note that the bilingual bpe2char model can translate 3,000 sentences in 66.63 minutes while the bilingual char2char model requires 71.71 minutes (online, not in batch). See Table 8 for the exact details.
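As a quick sanity check on the decoding figures quoted above, the per-sentence latency and the relative slowdown follow directly (a back-of-the-envelope computation; the derived per-sentence numbers are not reported in the paper):

```python
# Decoding 3,000 sentences online (one at a time), as stated above.
sentences = 3000
bpe2char_min, char2char_min = 66.63, 71.71

per_sent_bpe = bpe2char_min * 60 / sentences   # ~1.33 s per sentence
per_sent_c2c = char2char_min * 60 / sentences  # ~1.43 s per sentence
slowdown = char2char_min / bpe2char_min - 1    # ~7.6% slower decoding

print(f"{per_sent_bpe:.2f}s vs {per_sent_c2c:.2f}s per sentence "
      f"({slowdown:.1%} slower)")
```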
1610.03017#41
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
42
| Model | Time to execute 1k updates (s) | Batch size | Time to decode 3k sentences (m) |
|---|---|---|---|
| bpe2char | 2461.72 | 128 | 66.63 |
| char2char | 2371.93 | 64 | 71.71 |
| Multi bpe2char | 1646.37 | 64 | 68.99 |
| Multi char2char | 2514.23 | 64 | 72.33 |

Table 8: Speed comparison. The second column shows the time taken to execute 1,000 training updates. The model makes each update after having seen one mini-batch.

Further observations. We also note that the multilingual models are less prone to overfitting than the bilingual models. This is particularly visible for low-resource language pairs such as FI-EN. Figure 2 shows the evolution of the FI-EN validation BLEU scores, where the bilingual models overfit rapidly but the multilingual models seem to regularize learning by training simultaneously on other language pairs.

[Figure 2 (plot): validation BLEU on FI-EN newstest-2013 versus number of updates (k), comparing bi-bpe2char, bi-char2char, multi-bpe2char and multi-char2char.]

Figure 2: Multilingual models overfit less than bilingual models on low-resource language pairs.
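Since each update consumes exactly one mini-batch, the Table 8 entries above also imply a training throughput in sentences per second. A hedged back-of-the-envelope derivation (these throughput figures are computed here, not reported in the paper; note the bilingual bpe2char row uses a larger batch size, so the rows are not directly comparable):

```python
# (seconds per 1,000 updates, batch size) taken from Table 8.
rows = {
    "bi-bpe2char":     (2461.72, 128),
    "bi-char2char":    (2371.93, 64),
    "multi-bpe2char":  (1646.37, 64),
    "multi-char2char": (2514.23, 64),
}

for name, (secs_per_1k, batch) in rows.items():
    # 1,000 updates process batch * 1,000 training sentences.
    throughput = batch * 1000 / secs_per_1k
    print(f"{name:16s} {throughput:6.1f} sentences/s")
```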
1610.03017#42
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
43
# 8 Conclusion

We propose a fully character-level NMT model that accepts a sequence of characters in the source language and outputs a sequence of characters in the target language. What is remarkable about this model is the absence of explicitly hard-coded knowledge of words and their boundaries, and that the model learns these concepts from a translation task alone. We show that the fully character-level model performs as well as, or better than, subword-level translation models. The performance gain is distinctly pronounced in the multilingual many-to-one translation task, where our results show that the character-level model can assign model capacity to different languages more efficiently than the subword-level models. We observe a particularly large improvement in FI-EN translation when the model is trained to translate multiple languages, indicating positive cross-lingual transfer to a low-resource language pair.
1610.03017#43
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
44
We discover two main benefits of the multilingual character-level model: (1) it is much more parameter-efficient than the bilingual models and (2) it can naturally handle intra-sentence code-switching as a result of the many-to-one translation task. Ultimately, we present a case for fully character-level translation: translation at the level of characters is strongly beneficial and should be encouraged more. The repository https://github.com/nyu-dl/dl4mt-c2c contains the source code and pre-trained models for reproducing the experimental results.

In the next stage of this research, we will investigate extending our multilingual many-to-one translation models to perform many-to-many translation, which will allow the decoder, like the encoder, to learn from multiple target languages. Furthermore, a more thorough investigation into model architectures and hyperparameters is needed.
1610.03017#44
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
45
# Acknowledgements

KC thanks the support by eBay, Facebook, Google (Google Faculty Award 2016) and NVIDIA (NVIDIA AI Lab 2016-2019). This work was partly supported by Samsung Advanced Institute of Technology (Deep Learning). JL was supported by a Qualcomm Innovation Fellowship, and thanks David Yenicelik and Kevin Wallimann for their contribution in designing the qualitative analysis. The authors would like to thank Prof. Zheng Zhang (NYU Shanghai) for fruitful discussion and comments, as well as Yvette Graham for her help with the human evaluation.

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR).
1610.03017#45
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
46
Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing.

Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of the 8th Workshop on Syntax, Semantics, and Structure in Statistical Translation, page 103.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empirical Methods in Natural Language Processing.

Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
1610.03017#46
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]
1610.03017
47
Marta R. Costa-Jussà and José A. R. Fonollosa. 2016. Character-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, page 357.

Ferdinand de Saussure. 1916. Course in General Linguistics.

Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics.

Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation.

Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics.
1610.03017#47
Fully Character-Level Neural Machine Translation without Explicit Segmentation
Most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. We introduce a neural machine translation (NMT) model that maps a source character sequence to a target character sequence without any segmentation. We employ a character-level convolutional network with max-pooling at the encoder to reduce the length of source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. Our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on WMT'15 DE-EN and CS-EN, and gives comparable performance on FI-EN and RU-EN. We then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task. In this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
http://arxiv.org/pdf/1610.03017
Jason Lee, Kyunghyun Cho, Thomas Hofmann
cs.CL, cs.LG
Transactions of the Association for Computational Linguistics (TACL), 2017
null
cs.CL
20161010
20170613
[ { "id": "1602.00367" }, { "id": "1609.08144" }, { "id": "1511.04586" } ]