Dataset columns (name: type, min–max length or value):
- doi: string, lengths 10–10
- chunk-id: int64, values 0–936
- chunk: string, lengths 401–2.02k
- id: string, lengths 12–14
- title: string, lengths 8–162
- summary: string, lengths 228–1.92k
- source: string, lengths 31–31
- authors: string, lengths 7–6.97k
- categories: string, lengths 5–107
- comment: string, lengths 4–398
- journal_ref: string, lengths 8–194
- primary_category: string, lengths 5–17
- published: string, lengths 8–8
- updated: string, lengths 8–8
- references: list
1704.08795
66
$\max_\theta \sum_{i=1}^{N} \sum_{j=1}^{m^{(i)}} \log p\big(a_j^{(i)} \mid \tilde{s}_j^{(i)}\big)$, where $m^{(i)}$ is the length of execution $i$, $\tilde{s}_j^{(i)}$ is the agent context at step $j$ in sample $i$, and $a_j^{(i)}$ is the demonstration action of step $j$ in demonstration execution $i$. Agent contexts are generated with the annotated previous actions (i.e., the previous images and the previous action). We use minibatch gradient descent with Adam updates (Kingma and Ba, 2014). DQN: We use deep Q-learning (Mnih et al., 2015) to train a Q-network. We use the architecture described in Section 4, except replacing the task-specific part with a single 81-dimensional layer. In contrast to our probabilistic model, we do not decompose block and direction selection. We use the shaped reward function, including both F1 and F2. We use a replay memory of size 2,000 and an ε-greedy behavior policy to generate rollouts. We attenuate the value of ε from 1 to 0.1 over 100,000 steps and use prioritized sweeping for sampling. We also use a target network that is synchronized after every epoch. REINFORCE: We use the REINFORCE algorithm (Sutton et al., 1999) to train our agent. REINFORCE performs
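To make the DQN exploration setup concrete, here is a minimal sketch of the replay memory and the linearly annealed ε-greedy policy described above, assuming Python; the Q-network, prioritized sampling, and target-network synchronization are placeholders left out of the sketch.

```python
# Sketch only: replay size 2,000 and epsilon annealed from 1.0 to 0.1 over
# 100,000 steps, as stated above; everything else (Q-network, prioritized
# sampling, target network) is omitted or assumed.
import random
from collections import deque

REPLAY_SIZE, EPS_START, EPS_END, ANNEAL_STEPS = 2000, 1.0, 0.1, 100_000
replay = deque(maxlen=REPLAY_SIZE)  # oldest transitions are evicted first

def epsilon(step):
    # linear attenuation of epsilon over the first 100,000 steps
    frac = min(step / ANNEAL_STEPS, 1.0)
    return EPS_START + frac * (EPS_END - EPS_START)

def act(q_values, step, num_actions=81):
    # epsilon-greedy behavior policy over the 81-dimensional output layer
    if random.random() < epsilon(step):
        return random.randrange(num_actions)
    return max(range(num_actions), key=lambda a: q_values[a])

print(epsilon(0), epsilon(50_000), epsilon(200_000))  # 1.0, 0.55, 0.1
```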
1704.08795#66
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
67
after every epoch. REINFORCE: We use the REINFORCE algorithm (Sutton et al., 1999) to train our agent. REINFORCE performs policy gradient learning with the total reward accumulated over the roll-out, as opposed to using immediate rewards as in our main approach. REINFORCE estimates the total reward with Monte Carlo sampling by performing a roll-out. We use the shaped reward function, including both F1 and F2 terms. Similar to our approach, we initialize with a SUPERVISED model and regularize the objective with the entropy of the policy. We do not use a reward baseline. SUPERVISED with Oracle Planner: We use a variant of our model assuming a perfect planner. We modify the architecture in Section 4 so that the model predicts the block to move and its target position as a pair of coordinates. This model assumes that the sequence of actions is inferred from the predicted target position
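As a concrete illustration of this baseline, a minimal REINFORCE update with total-reward returns and entropy regularization might look as follows, assuming PyTorch; `policy`, the roll-out format, and where the entropy term enters are assumptions, not the paper's exact implementation.

```python
# Sketch only: return-weighted log-likelihood with an entropy bonus and no
# reward baseline, mirroring the description above; `policy` is any module
# mapping agent contexts to action logits.
import torch

def reinforce_update(policy, optimizer, states, actions, rewards, entropy_coef=0.1):
    """states: batch of agent contexts for one roll-out; actions: (T,) long tensor;
    rewards: list of per-step rewards collected along the roll-out."""
    total_return = sum(rewards)                      # total accumulated reward
    dist = torch.distributions.Categorical(logits=policy(states))
    log_probs = dist.log_prob(actions)
    # maximize return-weighted log-likelihood plus the policy entropy
    loss = -(total_return * log_probs).sum() - entropy_coef * dist.entropy().sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```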
1704.08795#67
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
68
using an oracle planner. We train using supervised learning by maximizing the likelihood of the block being moved and minimizing the squared distance between the predicted target position and the annotated target position. # C Parameters and Initialization # C.1 Architecture Parameters We use an RGB image of 120 × 120 pixels and a convolutional neural network (CNN) with 4 layers. The first two layers apply 32 8 × 8 filters with a stride of 4, the third applies 32 4 × 4 filters with a stride of 2. The last layer performs an affine transformation to create a 200-dimensional vector. We linearly scale all images to have zero mean and unit norm. We use a single-layer RNN with 150-dimensional word embeddings and 250 LSTM units. The dimension of the action embedding ψa is 56, including 32 for embedding the block and 24 for embedding the directions. W(1) is a 506 × 120 matrix and b(1) is a 120-dimensional vector. W(B) is 120 × 20 for the 20 blocks, and W(D) is 120 × 5 for the four directions (north, south, east, west) and the STOP action. We consider K = 4 previous images and use horizon length J = 40. # C.2 Initialization
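The dimensions above pin down most of the encoder; a minimal sketch in PyTorch is given below. The input channel count, the absence of padding, the ReLU nonlinearities, and the vocabulary size (taken from the Blocks entry of Table 4 as a plausible default) are assumptions, not values stated in this appendix.

```python
# Sketch only: 120x120 RGB input; conv layers of 32 8x8/stride-4, 32 8x8/stride-4,
# 32 4x4/stride-2; an affine layer to a 200-d vector; 150-d word embeddings and
# a 250-unit LSTM whose hidden states are mean-pooled (see Appendix E).
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    def __init__(self, in_channels=3, out_dim=200):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),  # 120 -> 29
            nn.Conv2d(32, 32, kernel_size=8, stride=4), nn.ReLU(),           # 29 -> 6
            nn.Conv2d(32, 32, kernel_size=4, stride=2), nn.ReLU(),           # 6 -> 2
        )
        self.fc = nn.Linear(32 * 2 * 2, out_dim)  # affine map to a 200-d vector

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))

class InstructionEncoder(nn.Module):
    def __init__(self, vocab_size=1426, embed_dim=150, hidden_dim=250):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))
        return out.mean(dim=1)  # mean of the LSTM hidden states

img, tokens = torch.zeros(1, 3, 120, 120), torch.zeros(1, 12, dtype=torch.long)
print(ImageEncoder()(img).shape, InstructionEncoder()(tokens).shape)
```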
1704.08795#68
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
69
# C.2 Initialization Embedding matrices are initialized with a zero-mean unit-variance Gaussian distribution. All biases are initialized to 0. We use a zero-mean truncated normal distribution to initialize the CNN filters (0.005 variance) and CNN weight matrices (0.004 variance). All other weight matrices are initialized with a normal distribution (mean = 0.0, standard deviation = 0.01). The matrices used in the word embedding function ψ are initialized with a zero-mean normal distribution with standard deviation of 1.0. Action embedding matrices, which are used for ψa, are initialized with a zero-mean normal distribution with 0.001 standard deviation. We initialize policy gradient learning, including our approach, with parameters estimated using supervised learning for two epochs, except the direction parameters W(D) and b(D), which we learn from scratch. We found this initialization method to provide a good balance between strong initialization and not biasing the learning too much, which can result in limited exploration. # C.3 Learning Parameters We use the distance error on a small validation set as the stopping criterion. After each epoch, we save
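A minimal sketch of this initialization scheme, assuming PyTorch, is shown below; the module-type dispatch and the truncation bounds of `trunc_normal_` are assumptions, and the variances above are converted to standard deviations.

```python
# Sketch only: maps the stated variances/std-devs onto common module types;
# the paper's exact parameter grouping may differ.
import math
import torch.nn as nn

def init_params(module):
    if isinstance(module, nn.Conv2d):
        # CNN filters: zero-mean truncated normal with 0.005 variance
        nn.init.trunc_normal_(module.weight, mean=0.0, std=math.sqrt(0.005))
        if module.bias is not None:
            nn.init.zeros_(module.bias)              # all biases start at 0
    elif isinstance(module, nn.Embedding):
        # word-embedding matrices: zero-mean normal with std 1.0
        nn.init.normal_(module.weight, mean=0.0, std=1.0)
    elif isinstance(module, nn.Linear):
        # other weight matrices: zero-mean normal with std 0.01
        nn.init.normal_(module.weight, mean=0.0, std=0.01)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# usage: model.apply(init_params)
```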
1704.08795#69
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
70
# C.3 Learning Parameters We use the distance error on a small validation set as the stopping criterion. After each epoch, we save the model, and select the final model based on development set performance. While this method overfits the development set, we found it more reliable than using the small validation set alone. Our relatively modest performance degradation on the held-out set illustrates that our models generalize well. We set the reward and shaping penalties δ = δf = 0.02. The entropy regularization coefficient is λ = 0.1. The learning rate is µ = 0.001 for supervised learning and µ = 0.00025 for policy gradient. We clip the gradient at a norm of 5.0. All learning algorithms use a mini-batch of size 32 during training. # D Dataset Comparisons
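Collected as code, these optimization settings might look like the following sketch, assuming PyTorch and a placeholder model and loss; only the learning rates, gradient-clipping norm, and batch size come from the text above, and the layer dimensions are illustrative.

```python
# Sketch only: Adam with lr 0.001 (supervised) or 0.00025 (policy gradient),
# gradient clipping at norm 5.0, and mini-batches of size 32.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(506, 81)                      # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

def supervised_step(contexts, demo_actions):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(contexts), demo_actions)  # likelihood of demo actions
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()
    return loss.item()

batch = torch.randn(32, 506), torch.randint(0, 81, (32,))
print(supervised_step(*batch))
```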
1704.08795#70
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
71
# D Dataset Comparisons We briefly review instruction-following datasets in Table 4, including: Blocks (Bisk et al., 2016), SAIL (MacMahon et al., 2006; Chen and Mooney, 2011), Matuszek (Matuszek et al., 2012), and Misra (Misra et al., 2015). Overall, Blocks provides the largest training set and a relatively complex environment with well over 2.43 × 10^18 possible states.⁸ The most similar dataset is SAIL, which provides only partial observability of the environment (i.e., the agent only observes what is around it). However, SAIL is less complex on other dimensions related to the instructions, trajectories, and action space. In addition, while Blocks has a large number of possible states, SAIL includes only 400 states. The small number of states makes it difficult to learn vision models that generalize well. Misra (Misra et al., 2015) provides a parameterized action space (e.g., grasp(cup)), which leads to a large number of potential actions. However, the corpus is relatively small. # E Common Questions
1704.08795#71
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
72
# E Common Questions This is a list of potential questions following various decisions that we made. While we ablated and discussed all the crucial decisions in the paper, we decided to include this appendix to provide as much information as possible. Is it possible to manually engineer a competitive reward function without shaping? Shaping is a principled approach to add information to a problem reward with relatively intuitive potential functions. Our experiments demonstrate its effectiveness. Investing engineering effort in a reward function specifically designed for the task is a potential alternative approach. ⁸We compute this loose lower bound on the number of states in the block world as 20! ≈ 2.43 × 10^18 (the number of block permutations). This is a very loose lower bound. Table 4: Comparison of several related natural language instruction corpora.

| Name | # Samples | Vocabulary Size | Mean Instruction Length | # Actions | Mean Trajectory Length | Partially Observed |
|---|---|---|---|---|---|---|
| Blocks | 16,767 | 1,426 | 15.27 | 81 | 15.4 | No |
| SAIL | 3,237 | 563 | 7.96 | 3 | 3.12 | Yes |
| Matuszek | 217 | 39 | 6.65 | 3 | N/A | No |
| Misra | 469 | 775 | 48.7 | > 100 | 21.5 | No |
1704.08795#72
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
73
Table 4: Comparison of several related natural language instruction corpora. Investing engineering effort in a reward function specifically designed for the task is a potential alternative approach. Are you using beam search? Why not? While using beam search can probably increase our performance, we chose to avoid it. We are motivated by robotic scenarios, where implementing beam search is a challenging task and often not possible. We distinguish between beam search and backtracking. Beam search is also incompatible with common assumptions of reinforcement learning, although it is often used at test time with reinforcement learning systems. Why are you using the mean of the LSTM hidden states instead of just the final state? We empirically tested both options. Using the mean worked better. This was also observed by Narasimhan et al. (2015). Understanding in which scenarios one technique is better than the other is an important question for future work. Can you provide more details about initialization? Please see Appendix C.
1704.08795#73
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
74
Can you provide more details about initialization? Please see Appendix C. Does the agent in the block world learn to move obstacles and other blocks? While the agent can move any block at any step, in practice this rarely happens. The agent prefers to move blocks around obstacles rather than moving other blocks and moving them back into place afterwards. This behavior is learned from the data and appears even when we use only a very limited amount of demonstrations. We hypothesize that in other tasks the agent is likely to learn that moving obstacles is advantageous, for example when demonstrations include moving obstacles. Does the agent know which blocks are present? Not all blocks are included in each task. The agent must infer which blocks are present from the image and instruction. The set of possible actions, which includes moving all possible blocks, does not change between tasks. If the agent chooses to move a block that is not present, the world state does not change.
1704.08795#74
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
75
Did you experiment with executing sequences of instructions? The Bisk et al. (2016) corpus includes such instructions, right? The majority of existing corpora, including SAIL (Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Mei et al., 2016), provide segmented sequences of instructions. Existing approaches take advantage of this segmentation during training. For example, Chen and Mooney (2011), Artzi and Zettlemoyer (2013), and Mei et al. (2016) all train on segmented data and test on sequences of instructions by doing inference on one sentence at a time. We are also able to do this. Similar to these approaches, we will likely suffer from cascading errors. The multi-instruction paragraphs in the Bisk et al. (2016) data are an open problem and present new challenges beyond just instruction length. For example, they often merge multiple block placements in one instruction (e.g., put the SRI, HP, and Dell blocks in a row). Since the original corpus does not provide trajectories and our automatic generation procedure is not able to resolve which block to move first, we do not have demonstrations
1704.08795#75
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
77
Does the agent explicitly mark where it is in the instruction? We estimate that over 90% of the instructions describe the target position. Therefore, it is often not clear how much of the instruction was completed during the execution. The agent does not have an explicit mechanism to mark portions of the instruction that are complete. We briefly experimented with attention, but found that empirically it does not help in our domain. Designing an architecture that allows such considerations is an important direction for future work. Potential-based shaping was proven to be safe when maximizing the total expected reward. Does this apply to the contextual bandit setting, where you maximize the immediate reward? The safe shaping theorems (Appendix A) do not hold in our contextual bandit setting. We show empirically that shaping works in practice. However, whether and how it changes the ordering of policies is an open question.
1704.08795#77
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.08795
78
How long does it take to train? How many frames does the agent observe? The agent observes about 2.5 million frames. It takes 16 hours using 50% capacity of an Nvidia Pascal Titan X GPU to train using our approach. DQN takes more than twice the time for the same number of epochs. Supervised learning takes about 9 hours to converge. We also trained DQN for around four days, but did not observe improvement. Did you consider initializing DQN with supervised learning? Initializing DQN with the probabilistic supervised model is challenging. Since DQN is not probabilistic, it is not clear what this initialization means. Smart initialization of DQN is an important problem for future work.
1704.08795#78
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
http://arxiv.org/pdf/1704.08795
Dipendra Misra, John Langford, Yoav Artzi
cs.CL
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
null
cs.CL
20170428
20170722
[]
1704.07813
0
arXiv:1704.07813v2 [cs.CV] 1 Aug 2017 # Unsupervised Learning of Depth and Ego-Motion from Video # Tinghui Zhou∗ UC Berkeley # Matthew Brown Google # Noah Snavely Google # David G. Lowe Google # Abstract We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multi-view pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.
1704.07813#0
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
1
[Figure 1 panels: (a) Training: unlabeled video clips. (b) Testing: single-view depth and multi-view pose estimation.] Figure 1. The training data to our system consists solely of unlabeled image sequences capturing scene appearance from different viewpoints, where the poses of the images are not provided. Our training procedure produces two models that operate independently, one for single-view depth prediction, and one for multi-view camera pose estimation. # 1. Introduction
1704.07813#1
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
2
# 1. Introduction Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision has failed to recreate similar modeling capabilities for real-world scenes (e.g., where non-rigidity, occlusion and lack of texture are present). So why do humans excel at this task? One hypothesis is that we develop a rich, structural understanding of the world through our past visual experience that has largely consisted of moving around and observing vast numbers of scenes and developing consistent modeling of our observations. From millions of such observations, we have learned about the regularities of the world—roads are flat, buildings are straight, cars are supported by roads, etc., and we can apply this knowledge when perceiving a new scene, even from a single monocular image. ∗The majority of the work was done while interning at Google.
1704.07813#2
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
3
∗The majority of the work was done while interning at Google. In this work, we mimic this approach by training a model that observes sequences of images and aims to explain its observations by predicting likely camera motion and the scene structure (as shown in Fig. 1). We take an end-to-end approach in allowing the model to map directly from input pixels to an estimate of ego-motion (parameterized as 6-DoF transformation matrices) and the underlying scene structure (parameterized as per-pixel depth maps under a reference view). We are particularly inspired by prior work that has suggested view synthesis as a metric [44] and recent work that tackles the calibrated, multi-view 3D case in an end-to-end framework [10]. Our method is unsupervised, and can be trained simply using sequences of images with no manual labeling or even camera motion information. Our approach builds upon the insight that a geometric view synthesis system only performs consistently well when its intermediate predictions of the scene geometry and the camera poses correspond to the physical ground
1704.07813#3
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
4
Our approach builds upon the insight that a geometric view synthesis system only performs consistently well when its intermediate predictions of the scene geometry and the camera poses correspond to the physical ground truth. While imperfect geometry and/or pose estimation can cheat with reasonable synthesized views for certain types of scenes (e.g., textureless), the same model would fail miserably when presented with another set of scenes with more diverse layout and appearance structures. Thus, our goal is to formulate the entire view synthesis pipeline as the inference procedure of a convolutional neural network, so that by training the network on large-scale video data for the ‘meta’-task of view synthesis the network is forced to learn about intermediate tasks of depth and camera pose estimation in order to come up with a consistent explanation of the visual world. Empirical evaluation on the KITTI [15] benchmark demonstrates the effectiveness of our approach on both single-view depth and camera pose estimation. Our code will be made available at https://github.com/tinghuiz/SfMLearner. # 2. Related work
1704.07813#4
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
5
# 2. Related work Structure from motion The simultaneous estimation of structure and motion is a well-studied problem with an established toolchain of techniques [12, 50, 38]. Whilst the traditional toolchain is effective and efficient in many cases, its reliance on accurate image correspondence can cause problems in areas of low texture, complex geometry/photometry, thin structures, and occlusions. To address these issues, several of the pipeline stages have been recently tackled using deep learning, e.g., feature matching [18], pose estimation [26], and stereo [10, 27, 53]. These learning-based techniques are attractive in that they are able to leverage external supervision during training, and potentially overcome the above issues when applied to test data.
1704.07813#5
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
6
Warping-based view synthesis One important application of geometric scene understanding is the task of novel view synthesis, where the goal is to synthesize the appearance of the scene seen from novel camera viewpoints. A classic paradigm for view synthesis is to first either estimate the underlying 3D geometry explicitly or establish pixel correspondence among input views, and then synthesize the novel views by compositing image patches from the input views (e.g., [4, 55, 43, 6, 9]). Recently, end-to-end learning has been applied to reconstruct novel views by transforming the input based on depth or flow, e.g., DeepStereo [10], Deep3D [51] and Appearance Flows [54]. In these methods, the underlying geometry is represented by quantized depth planes (DeepStereo), probabilistic disparity maps (Deep3D) and view-dependent flow fields (Appearance Flows), respectively. Unlike methods that directly map from input views to the target view (e.g., [45]), warping-based methods are forced to learn intermediate predictions of geometry and/or correspondence. In this work, we aim to distill such geometric reasoning capability from CNNs trained to perform warping-based view synthesis.
1704.07813#6
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
7
Learning single-view 3D from registered 2D views Our work is closely related to a line of recent research on learning single-view 3D inference from registered 2D observations. Garg et al. [14] propose to learn a single-view depth estimation CNN using projection errors to a calibrated stereo twin for supervision. Concurrently, Deep3D [51] predicts a second stereo viewpoint from an input image using stereoscopic film footage as training data. A similar approach was taken by Godard et al. [16], with the addition of a left-right consistency constraint, and a better architecture design that led to impressive performance. Like our approach, these techniques only learn from image observations of the world, unlike methods that require explicit depth for training, e.g., [20, 42, 7, 27, 30].
1704.07813#7
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
8
These techniques bear some resemblance to direct methods for structure and motion estimation [22], where the camera parameters and scene depth are adjusted to minimize a pixel-based error function. However, rather than directly minimizing the error to obtain the estimation, the CNN-based methods only take a gradient step for each batch of input instances, which allows the network to learn an implicit prior from a large corpus of related imagery. Several authors have explored building differentiable rendering operations into their models that are trained in this way, e.g., [19, 29, 34]. While most of the above techniques (including ours) are mainly focused on inferring depth maps as the scene geometry output, recent work (e.g., [13, 41, 46, 52]) has also shown success in learning 3D volumetric representations from 2D observations based on similar principles of projective geometry. Fouhey et al. [11] further show that it is even possible to learn 3D inference without 3D labels (or registered 2D views) by utilizing scene regularity.
1704.07813#8
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
9
Unsupervised/Self-supervised learning from video Another line of related work to ours is visual representation learning from video, where the general goal is to design pretext tasks for learning generic visual features from video data that can later be re-purposed for other vision tasks such as object detection and semantic segmentation. Such pretext tasks include ego-motion estimation [2, 24], tracking [49], temporal coherence [17], temporal order verification [36], and object motion mask prediction [39]. While we focus on inferring the explicit scene geometry and ego-motion in this work, intuitively, the internal representation learned by the deep network (especially the single-view depth CNN) should capture some level of semantics that could generalize to other tasks as well. Concurrent to our work, Vijayanarasimhan et al. [48] independently propose a framework for joint training of depth, camera motion and scene motion from videos. While both methods are conceptually similar, ours is focused on the unsupervised aspect, whereas their framework adds the capability to incorporate supervision (e.g., depth, camera motion or scene motion). There are significant differences in how scene dynamics are modeled during training: they explicitly solve for object motion, whereas our explainability mask discounts regions undergoing motion, occlusion and other factors.
1704.07813#9
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
10
# 3. Approach Here we propose a framework for jointly training a single-view depth CNN and a camera pose estimation CNN from unlabeled video sequences. Despite being jointly trained, the depth model and the pose estimation model can be used independently during test-time inference. Training examples to our model consist of short image sequences of scenes captured by a moving camera. While our training procedure is robust to some degree of scene motion, we assume that the scenes we are interested in are mostly rigid, i.e., the scene appearance change across different frames is dominated by the camera motion. Figure 2. Overview of the supervision pipeline based on view synthesis. The depth network takes only the target view as input, and outputs a per-pixel depth map ˆDt. The pose network takes both the target view (It) and the nearby/source views (e.g., It−1 and It+1) as input, and outputs the relative camera poses (ˆTt→t−1, ˆTt→t+1). The outputs of both networks are then used to inverse warp the source views (see Sec. 3.2) to reconstruct the target view, and the photometric reconstruction loss is used for training the CNNs. By utilizing view synthesis as supervision, we are able to train the entire framework in an unsupervised manner from videos. # 3.1. View synthesis as supervision
1704.07813#10
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
11
# 3.1. View synthesis as supervision The key supervision signal for our depth and pose prediction CNNs comes from the task of novel view synthesis: given one input view of a scene, synthesize a new image of the scene seen from a different camera pose. We can synthesize a target view given a per-pixel depth in that image, plus the pose and visibility in a nearby view. As we will show next, this synthesis process can be implemented in a fully differentiable manner with CNNs as the geometry and pose estimation modules. Visibility can be handled, along with non-rigidity and other non-modeled factors, using an "explainability" mask, which we discuss later (Sec. 3.3). Let us denote $\langle I_1, \dots, I_N \rangle$ as a training image sequence with one of the frames $I_t$ being the target view and the rest being the source views $I_s$ ($1 \le s \le N$, $s \ne t$). The view synthesis objective can be formulated as $\mathcal{L}_{vs} = \sum_{s} \sum_{p} \big| I_t(p) - \hat{I}_s(p) \big|$, (1) where p indexes over pixel coordinates, and $\hat{I}_s$ is the source view $I_s$ warped to the target coordinate frame based on a depth image-based rendering module [8] (described in Sec. 3.2), taking the predicted depth $\hat{D}_t$, the predicted 4×4 camera transformation matrix¹ $\hat{T}_{t\to s}$, and the source view $I_s$ as input.
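As a concrete reading of Eq. (1), a minimal sketch of the photometric view-synthesis loss is given below, assuming PyTorch tensors; `warped_sources` stands in for the output of the differentiable warping module of Sec. 3.2, and averaging instead of summing over pixels is a scaling choice of the sketch, not the paper's exact formulation.

```python
# Sketch only: L1 photometric error between the target view and each source
# view after it has been warped into the target frame.
import torch

def view_synthesis_loss(target, warped_sources):
    """target: (B, 3, H, W); warped_sources: iterable of (B, 3, H, W) tensors."""
    loss = torch.zeros((), dtype=target.dtype)
    for warped in warped_sources:
        loss = loss + (target - warped).abs().mean()  # |I_t(p) - I_s_hat(p)|
    return loss

target = torch.rand(1, 3, 8, 8)
print(view_synthesis_loss(target, [torch.rand(1, 3, 8, 8) for _ in range(2)]))
```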
1704.07813#11
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
12
Note that the idea of view synthesis as supervision has also been recently explored for learning single-view depth estimation [14, 16] and multi-view stereo [10]. However, to the best of our knowledge, all previous work requires posed image sets during training (and testing too in the case of DeepStereo), while our framework can be applied to standard videos without pose information. Furthermore, it predicts the poses as part of the learning framework. See Figure 2 for an illustration of our learning pipeline for depth and pose estimation. ¹In practice, the CNN estimates the Euler angles and the 3D translation vector, which are then converted to the transformation matrix. Figure 3. Illustration of the differentiable image warping process. For each point pt in the target view, we first project it onto the source view based on the predicted depth and camera pose, and then use bilinear interpolation to obtain the value of the warped image ˆIs at location pt. # 3.2. Differentiable depth image-based rendering
1704.07813#12
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
13
# 3.2. Differentiable depth image-based rendering As indicated in Eq. 1, a key component of our learning framework is a differentiable depth image-based renderer that reconstructs the target view It by sampling pixels from a source view Is based on the predicted depth map ˆDt and the relative pose ˆTt→s. Let pt denote the homogeneous coordinates of a pixel in the target view, and K denote the camera intrinsics matrix. We can obtain pt's projected coordinates onto the source view ps by² $p_s \sim K \hat{T}_{t\to s} \hat{D}_t(p_t) K^{-1} p_t$ (2)
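A minimal numeric sketch of the projection in Eq. (2) is shown below, assuming NumPy and a single pixel; the intrinsics, depth value, and relative transform are made-up placeholder values, not values from the paper.

```python
# Sketch only: back-project a target pixel with its predicted depth, move it
# into the source frame, and re-project with the intrinsics (Eq. 2).
import numpy as np

def project(p_t, depth, K, T_t_to_s):
    """p_t: homogeneous pixel coords (3,) in the target view; depth: D_t(p_t);
    K: 3x3 intrinsics; T_t_to_s: 4x4 relative transform from target to source."""
    cam_t = depth * (np.linalg.inv(K) @ p_t)          # back-project to 3D (target frame)
    cam_s = T_t_to_s @ np.append(cam_t, 1.0)          # move the point into the source frame
    pix_s = K @ cam_s[:3]                             # project with the intrinsics
    return pix_s[:2] / pix_s[2]                       # normalize homogeneous coordinates

K = np.array([[718.0, 0.0, 607.0], [0.0, 718.0, 185.0], [0.0, 0.0, 1.0]])
T = np.eye(4); T[0, 3] = 0.5                          # small lateral translation
print(project(np.array([300.0, 120.0, 1.0]), depth=10.0, K=K, T_t_to_s=T))
```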
1704.07813#13
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
14
$p_s \sim K \hat{T}_{t\to s} \hat{D}_t(p_t) K^{-1} p_t$ (2) Notice that the projected coordinates ps are continuous values. To obtain Is(ps) for populating the value of ˆIs(pt) (see Figure 3), we then use the differentiable bilinear sampling mechanism proposed in the spatial transformer networks [23] that linearly interpolates the values of the 4-pixel neighbors (top-left, top-right, bottom-left, and bottom-right) of ps to approximate Is(ps), i.e., $\hat{I}_s(p_t) = I_s(p_s) = \sum_{i \in \{t,b\},\, j \in \{l,r\}} w^{ij} I_s(p_s^{ij})$, where $w^{ij}$ is linearly proportional to the spatial proximity between $p_s$ and $p_s^{ij}$, and $\sum_{i,j} w^{ij} = 1$. A similar strategy is used in [54] for learning to directly warp between different views, while here the coordinates for pixel warping are obtained through projective geometry that enables the factorization of depth and camera pose. # 3.3. Modeling the model limitation
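The interpolation step can be sketched as follows, assuming NumPy and a single-channel image; in practice a batched implementation such as PyTorch's F.grid_sample performs the same 4-neighbor bilinear lookup.

```python
# Sketch only: interpolate the four pixel neighbors of a continuous source
# coordinate with weights proportional to proximity (they sum to 1).
import numpy as np

def bilinear_sample(img, x, y):
    """img: (H, W) array; (x, y): continuous source-view coordinates."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    wx1, wy1 = x - x0, y - y0        # proximity weights toward the right/bottom
    wx0, wy0 = 1.0 - wx1, 1.0 - wy1  # proximity weights toward the left/top
    return (wx0 * wy0 * img[y0, x0] + wx1 * wy0 * img[y0, x1] +
            wx0 * wy1 * img[y1, x0] + wx1 * wy1 * img[y1, x1])

img = np.arange(16.0).reshape(4, 4)
print(bilinear_sample(img, 1.5, 2.25))   # interpolates between four neighbors
```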
1704.07813#14
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
15
# 3.3. Modeling the model limitation
Note that when applied to monocular videos the above view synthesis formulation implicitly assumes 1) the scene is static without moving objects; 2) there is no occlusion/disocclusion between the target view and the source views; 3) the surface is Lambertian so that the photo-consistency error is meaningful. If any of these assumptions are violated in a training sequence, the gradients could be corrupted and potentially inhibit training. To improve the robustness of our learning pipeline to these factors, we additionally train an explainability prediction network (jointly and simultaneously with the depth and pose networks) that outputs a per-pixel soft mask ˆEs for each target-source pair, indicating the
[Footnote 2] For notation simplicity, we omit showing the necessary conversion to homogeneous coordinates along the steps of matrix multiplication.
[Figure 4 legend: Input / Conv / Deconv / Concat / Upsample + Concat / Prediction; (a) single-view depth network, (b) pose/explainability network.]
1704.07813#15
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
16
Figure 4. Network architecture for our depth/pose/explainability prediction modules. The width and height of each rectangular block indicate the output channels and the spatial dimension of the feature map at the corresponding layer respectively, and each reduction/increase in size indicates a change by a factor of 2. (a) For single-view depth, we adopt the DispNet [35] architecture with multi-scale side predictions. The kernel size is 3 for all the layers except for the first 4 conv layers with 7, 7, 5, 5, respectively. The number of output channels for the first conv layer is 32. (b) The pose and explainability networks share the first few conv layers, and then branch out to predict 6-DoF relative pose and multi-scale explainability masks, respectively. The number of output channels for the first conv layer is 16, and the kernel size is 3 for all the layers except for the first two conv and the last two deconv/prediction layers where we use 7, 5, 5, 7, respectively. See Section 3.5 for more details.
network’s belief in where direct view synthesis will be successfully modeled for each target pixel. Based on the predicted ˆEs, the view synthesis objective is weighted correspondingly by
1704.07813#16
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
17
network’s belief in where direct view synthesis will be successfully modeled for each target pixel. Based on the predicted ˆEs, the view synthesis objective is weighted correspondingly by
Lvs = Σ_{<I1,...,IN>∈S} Σ_p ˆEs(p) |It(p) − ˆIs(p)|.   (3)
Since we do not have direct supervision for ˆEs, training with the above loss would result in a trivial solution of the network always predicting ˆEs to be zero, which perfectly minimizes the loss. To resolve this, we add a regularization term Lreg(ˆEs) that encourages nonzero predictions by minimizing the cross-entropy loss with constant label 1 at each pixel location. In other words, the network is encouraged to minimize the view synthesis objective, but allowed a certain amount of slack for discounting the factors not considered by the model.
The second strategy of Sec. 3.4 is an explicit multi-scale and smoothness loss (e.g., as in [14, 16]) that allows gradients to be derived from larger spatial regions directly. We adopt the second strategy in this work as it is less sensitive to architectural choices. For smoothness, we minimize the L1 norm of the second-order gradients for the predicted depth maps (similar to [48]).
Our final objective becomes
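A minimal sketch of Eq. 3 together with the cross-entropy regularizer described above, assuming the source view has already been warped into the target frame; the function name and the use of a per-pixel mean instead of a sum are illustrative choices, not the paper's code.

```python
import numpy as np

def explainability_weighted_loss(I_t, I_s_warped, E_s, lambda_e=0.2):
    """View-synthesis loss for one target-source pair, weighted by the
    explainability mask (Eq. 3), plus the regularizer that keeps E_s away
    from the trivial all-zero solution.

    I_t, I_s_warped : (H, W) target image and source image warped to the target frame
    E_s             : (H, W) predicted explainability mask with values in (0, 1)
    """
    # Eq. 3 (mean over pixels here; the paper writes it as a sum).
    l_vs = np.mean(E_s * np.abs(I_t - I_s_warped))
    # Cross-entropy with constant label 1 at every pixel: -log E_s.
    l_reg = np.mean(-np.log(np.clip(E_s, 1e-6, 1.0)))
    return l_vs + lambda_e * l_reg
```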
1704.07813#17
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
18
Our final objective becomes
Lfinal = Σ_l ( L^l_vs + λs L^l_smooth + λe Σ_s Lreg(ˆE^l_s) ),   (4)
where l indexes over different image scales, s indexes over source images, and λs and λe are the weighting for the depth smoothness loss and the explainability regularization, respectively.
# 3.4. Overcoming the gradient locality
One remaining issue with the above learning pipeline is that the gradients are mainly derived from the pixel intensity difference between I(pt) and the four neighbors of I(ps), which would inhibit training if the correct ps (projected using the ground-truth depth and pose) is located in a low-texture region or far from the current estimation. This is a well-known issue in motion estimation [3]. Empirically, we found two strategies to be effective for overcoming this issue: 1) using a convolutional encoder-decoder architecture with a small bottleneck for the depth network that implicitly constrains the output to be globally smooth and facilitates gradients to propagate from meaningful regions to nearby regions; 2) explicit multi-scale and smoothness loss that allows gradients to be derived from larger spatial regions directly.
# 3.5. Network architecture
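Before the architecture details, the sketch below illustrates the second-order depth-smoothness term of Sec. 3.4 and the per-scale combination of Eq. 4, using λs = 0.5/l and λe = 0.2 as in the paper's training setup; the dictionary-based data structure is an assumption made only for illustration.

```python
import numpy as np

def smoothness_loss(depth):
    """L1 norm of second-order spatial gradients of a predicted depth map."""
    dx2 = depth[:, 2:] - 2 * depth[:, 1:-1] + depth[:, :-2]
    dy2 = depth[2:, :] - 2 * depth[1:-1, :] + depth[:-2, :]
    return np.mean(np.abs(dx2)) + np.mean(np.abs(dy2))

def final_objective(per_scale_terms, lambda_e=0.2):
    """Combine Eq. 4 over scales.

    per_scale_terms: list of dicts with keys
      'l_vs'      -- view-synthesis loss already summed over source views
      'depth'     -- (H, W) predicted depth at that scale
      'l_reg'     -- list of explainability regularizers, one per source view
      'downscale' -- the downscaling factor l of that scale
    """
    total = 0.0
    for term in per_scale_terms:
        lambda_s = 0.5 / term['downscale']          # lambda_s = 0.5 / l, per the paper
        total += (term['l_vs']
                  + lambda_s * smoothness_loss(term['depth'])
                  + lambda_e * sum(term['l_reg']))
    return total
```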
1704.07813#18
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
19
Single-view depth For single-view depth prediction, we adopt the DispNet architecture proposed in [35] that is mainly based on an encoder-decoder design with skip connections and multi-scale side predictions (see Figure 4). All conv layers are followed by ReLU activation except for the prediction layers, where we use 1/(α ∗ sigmoid(x) + β) with α = 10 and β = 0.01 to constrain the predicted depth to be always positive within a reasonable range. We also experimented with using multiple views as input to the depth network, but did not find this to improve the results. This is in line with the observations in [47], where optical flow constraints need to be enforced to utilize multiple views effectively.
Pose The input to the pose estimation network is the target view concatenated with all the source views (along the color channels), and the outputs are the relative poses between the target view and each of the source views. The network consists of 7 stride-2 convolutions followed by a 1 × 1 convolution with 6 ∗ (N − 1) output channels (corresponding to 3 Euler angles and 3-D translation for each source view). Finally, global average pooling is applied to aggregate predictions at all spatial locations. All conv layers are followed by ReLU except for the last layer where no nonlinear activation is applied.
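Two small sketches of the output heads described above: the 1/(α·sigmoid(x)+β) depth activation and the reshaping of the 6·(N−1) pose outputs into per-source Euler angles and translations. The function names and the per-source channel ordering are hypothetical.

```python
import numpy as np

def depth_from_prediction_logits(x, alpha=10.0, beta=0.01):
    """Map raw prediction-layer outputs to strictly positive depth values
    via 1 / (alpha * sigmoid(x) + beta), as described for the depth network."""
    return 1.0 / (alpha / (1.0 + np.exp(-x)) + beta)

def split_pose_output(pose_vec, num_source):
    """Reshape the 6*(N-1) values obtained after global average pooling into
    per-source (3 Euler angles, 3-D translation) pairs.  The assumption that
    each source view occupies 6 consecutive values is illustrative."""
    pose_vec = np.asarray(pose_vec).reshape(num_source, 6)
    return pose_vec[:, :3], pose_vec[:, 3:]   # (angles, translations)
```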
1704.07813#19
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
20
Explainability mask The explainability prediction network shares the first five feature encoding layers with the pose network, followed by 5 deconvolution layers with multi-scale side predictions. All conv/deconv layers are followed by ReLU except for the prediction layers with no nonlinear activation. The number of output channels for each prediction layer is 2 ∗ (N − 1), with every two channels normalized by softmax to obtain the explainability prediction for the corresponding source-target pair (the second channel after normalization is ˆEs and used in computing the loss in Eq. 3).
# 4. Experiments
Here we evaluate the performance of our system, and compare with prior approaches on single-view depth as well as ego-motion estimation. We mainly use the KITTI dataset [15] for benchmarking, but also use the Make3D dataset [42] for evaluating cross-dataset generalization ability.
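A sketch of the explainability head described above: each consecutive channel pair of the 2·(N−1)-channel prediction is softmax-normalized and the second channel is kept as ˆEs. The NumPy layout and the assumption that pairs are stored consecutively are illustrative.

```python
import numpy as np

def explainability_from_logits(logits):
    """Turn a (H, W, 2*(N-1)) explainability prediction into N-1 masks.

    Every consecutive pair of channels is normalized by softmax, and the
    second channel of each pair is taken as E_s for that source-target pair.
    """
    H, W, C = logits.shape
    pairs = logits.reshape(H, W, C // 2, 2)
    e = np.exp(pairs - pairs.max(axis=-1, keepdims=True))   # numerically stable softmax
    softmax = e / e.sum(axis=-1, keepdims=True)
    return softmax[..., 1]                                    # (H, W, N-1)
```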
1704.07813#20
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
21
Training Details We implemented the system using the publicly available TensorFlow [1] framework. For all the experiments, we set λs = 0.5/l (l is the downscaling factor for the corresponding scale) and λe = 0.2. During training, we used batch normalization [21] for all the layers except for the output layers, and the Adam [28] optimizer with β1 = 0.9, β2 = 0.999, learning rate of 0.0002 and mini-batch size of 4. The training typically converges after about 150K iterations. All the experiments are performed with image sequences captured with a monocular camera. We resize the images to 128 × 416 during training, but both the depth and pose networks can be run fully-convolutionally for images of arbitrary size at test time.
# 4.1. Single-view depth estimation
1704.07813#21
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
22
# 4.1. Single-view depth estimation We train our system on the split provided by [7], and exclude all the frames from the testing scenes as well as static sequences with mean optical flow magnitude less than 1 pixel for training. We fix the length of image sequences to be 3 frames, and treat the central frame as the target view and the ±1 frames as the source views. We use images captured by both color cameras, but treat them independently when forming training sequences. This results in a total of 44,540 sequences, out of which we use 40,109 for training and 4,431 for validation. To the best of our knowledge, no previous systems exist that learn single-view depth estimation in an unsupervised manner from monocular videos. Nonetheless, here we provide comparison with prior methods with depth supervision [7] and recent methods that use calibrated stereo images (i.e. with pose supervision) for training [14, 16]. Since the depth predicted by our method is defined up to a scale factor, for evaluation we multiply the predicted depth maps by a scalar ˆs that matches the median with the ground truth, i.e. ˆs = median(Dgt)/median(Dpred).
Figure 5. Our sample predictions on the Cityscapes dataset using the model trained on Cityscapes only. (Panels: input image, our prediction.)
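The median-matching step used for evaluation can be written in a few lines; the mask handling for sparse ground truth is an assumption, and the function name is illustrative.

```python
import numpy as np

def median_scale(depth_pred, depth_gt):
    """Scale a prediction that is only defined up to scale so that its median
    matches the ground truth: s = median(D_gt) / median(D_pred)."""
    mask = depth_gt > 0                       # sparse LiDAR ground truth: valid pixels only
    s = np.median(depth_gt[mask]) / np.median(depth_pred[mask])
    return depth_pred * s
```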
1704.07813#22
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
23
Similar to [16], we also experimented with first pre-training the system on the larger Cityscapes dataset [5] (sample predictions are shown in Figure 5), and then fine-tune on KITTI, which results in slight performance improvement. KITTI Here we evaluate the single-view depth performance on the 697 images from the test split of [7]. As shown in Table 1, our unsupervised method performs comparably with several su- pervised methods (e.g. Eigen et al. [7] and Garg et al. [14]), but falls short of concurrent work by Godard et al. [16] that uses cal- ibrated stereo images (i.e. with pose supervision) with left-right cycle consistency loss for training. For future work, it would be in- teresting to see if incorporating the similar cycle consistency loss into our framework could further improve the results. Figure 6 provides examples of visual comparison between our results and some supervised baselines over a variety of examples. One can see that although trained in an unsupervised manner, our results are comparable to that of the supervised baselines, and sometimes preserve the depth boundaries and thin structures such as trees and street lights better.
1704.07813#23
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
24
We show sample predictions made by our initial Cityscapes model and the final model (pre-trained on Cityscapes and then fine-tuned on KITTI) in Figure 7. Due to the domain gap between the two datasets, our Cityscapes model sometimes has difficulty in recovering the complete shape of the car/bushes, and mistakes them for distant objects. We also performed an ablation study of the explainability modeling (see Table 1), which turns out to offer only a modest performance boost. This is likely because 1) most of the KITTI scenes are static without significant scene motions, and 2) the occlusion/visibility effects only occur in small regions in sequences
Figure 6. Comparison of single-view depth estimation between Eigen et al. [7] (with ground-truth depth supervision), Garg et al. [14] (with ground-truth pose supervision), and ours (unsupervised). (Columns: input, ground truth, Eigen et al. (depth sup.), Garg et al. (pose sup.), ours (unsupervised).) The ground-truth depth map is interpolated from sparse measurements for visualization purpose. The last two rows show typical failure cases of our model, which sometimes struggles with vast open scenes and objects close to the front of the camera.
1704.07813#24
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
25
across a short time span (3-frames), which make the explainabil- ity modeling less essential to the success of training. Nonetheless, our explainability prediction network does seem to capture the fac- tors like scene motion and visibility well (see Sec. 4.3), and could potentially be more important for other more challenging datasets. Make3D To evaluate the generalization ability of our single- view depth model, we directly apply our model trained on Cityscapes + KITTI to the Make3D dataset unseen during train- ing. While there still remains a significant performance gap be- tween our method and others supervised using Make3D ground- truth depth (see Table 2), our predictions are able to capture the global scene layout reasonably well without any training on the Make3D images (see Figure 8). # 4.2. Pose estimation
1704.07813#25
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
26
# 4.2. Pose estimation To evaluate the performance of our pose estimation network, we applied our system to the official KITTI odometry split (con- taining 11 driving sequences with ground truth odometry obtained through the IMU/GPS readings, which we use for evaluation pur- pose only), and used sequences 00-08 for training and 09-10 for testing. In this experiment, we fix the length of input image se- quences to our system to 5 frames. We compare our ego-motion estimation with two variants of monocular ORB-SLAM [37] (a well-established SLAM system): 1) ORB-SLAM (full), which recovers odometry using all frames of the driving sequence (i.e. allowing loop closure and re-localization), and 2) ORB-SLAM (short), which runs on 5-frame snippets (same as our input setting). Another baseline we compare with is the dataset mean
1704.07813#26
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
27
Method Dataset — Supervision Error metric Accuracy metric Depth Pose AbsRel SqRel RMSE RMSElog 6 <1.25 5 <1.25% 5 < 1.25% Train set mean K v 0.403 5.530 8.709 0.403 0.593 0.776 0.878 Eigen et al. [7] Coarse K v 0.214 1.605 6.563 0.292 0.673 0.884 0.957 Eigen et al. [7] Fine K v 0.203 1.548 6.307 0.282 0.702 0.890 0.958 Liu et al. [32] K v 0.202 1.614 6.523 0.275 0.678 0.895 0.965 Godard et al. [16] K v 0.148 1.344 5.927 0.247 0.803 0.922 0.964 Godard et al. [16] CS+K v 0.124 1.076 5.311 0.219 0.847 0.942 0.973 Ours (w/o explainability) K 0.221 2.226 = 7.527 0.294 0.676 0.885 0.954 Ours K 0.208 1.768 6.856 0.283 0.678 0.885
1704.07813#27
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
28
= 7.527 0.294 0.676 0.885 0.954 Ours K 0.208 1.768 6.856 0.283 0.678 0.885 0.957 Ours cs 0.267 2.686 7.580 0.334 0.577 0.840 0.937 Ours CS+K 0.198 1.836 6.565 0.275 0.718 0.901 0.960 Garg et al. [14] cap 50m K v 0.169 1.080 5.104 0.273 0.740 0.904 0.962 Ours (w/o explainability) cap 50m. K 0.208 1.551 5.452 0.273 0.695 0.900 0.964 Ours cap 50m K 0.201 1.391 5.181 0.264 0.696 0.900 0.966 Ours cap 50m cs 0.260 2.232 6.148 0.321 0.590 0.852 0.945 Ours cap 50m CS+K 0.190 1.436 4.975 0.258 0.735 0.915 0.968
1704.07813#28
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
29
Table 1. Single-view depth results on the KITTI dataset [15] using the split of Eigen et al. [7] (baseline numbers taken from [16]). For training, K = KITTI, and CS = Cityscapes [5]. All methods we compare with use some form of supervision (either ground-truth depth or calibrated camera pose) during training. Note: results from Garg et al. [14] are capped at 50m depth, so we break these out separately in the lower part of the table.
Figure 7. Comparison of single-view depth predictions on the KITTI dataset by our initial Cityscapes model and the final model (pre-trained on Cityscapes and then fine-tuned on KITTI). (Columns: input image, Ours (CS), Ours (CS + KITTI).) The Cityscapes model sometimes makes structural mistakes (e.g. holes on car body) likely due to the domain gap between the two datasets.
1704.07813#29
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
30
Method | Supervision | AbsRel | SqRel | RMSE | RMSE log
Train set mean | Depth | 0.876 | 13.98 | 12.27 | 0.307
Karsch et al. [25] | Depth | 0.428 | 5.079 | 8.389 | 0.149
Liu et al. [33] | Depth | 0.475 | 6.562 | 10.05 | 0.165
Laina et al. [31] | Depth | 0.204 | 1.840 | 5.683 | 0.084
Godard et al. [16] | Pose | 0.544 | 10.94 | 11.76 | 0.193
Ours | none | 0.383 | 5.321 | 10.47 | 0.478
Table 2. Results on the Make3D dataset [42]. Similar to ours, Godard et al. [16] do not utilize any of the Make3D data during training, and directly apply the model trained on KITTI+Cityscapes to the test set. Following the evaluation protocol of [16], the errors are only computed where depth is less than 70 meters in a central image crop.
Figure 8. Our sample predictions on the Make3D dataset (panels: input, ground truth, ours). Note that our model is trained on KITTI + Cityscapes only, and directly tested on Make3D.
1704.07813#30
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
31
Figure 8. Our sample predictions on the Make3D dataset. Note that our model is trained on KITTI + Cityscapes only, and directly tested on Make3D.
To resolve scale ambiguity during evaluation, we first optimize the scaling factor for the predictions made by each method to best align with the ground truth, and then measure the Absolute Trajectory Error (ATE) [37] as the metric. ATE is computed on 5-frame snippets and averaged over the full sequence (see Footnote 3). As shown in Table 3 and Fig. 9, our method outperforms both baselines (mean odometry and ORB-SLAM (short)) that share the same input setting as ours, but falls short of ORB-SLAM (full), which leverages whole sequences (1591 for seq. 09 and 1201 for seq. 10) for loop closure and re-localization. For better understanding of our pose estimation results, we show in Figure 9 the ATE curve with varying amount of side motion of the car between the beginning and the end of a sequence (using ground-truth odometry) for 5-frame snippets.
[Footnote 3] For evaluating ORB-SLAM (full) we break down the trajectory of the full sequence into 5-frame snippets with the reference coordinate frame adjusted to the central frame of each snippet.
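A minimal sketch of the per-snippet evaluation described above: fit a single least-squares scale factor to align a predicted 5-frame trajectory with ground truth, then compute the ATE. The exact alignment convention of [37] may differ, and the function name and array layout are assumptions.

```python
import numpy as np

def ate_snippet(pred_xyz, gt_xyz):
    """Absolute Trajectory Error for one 5-frame snippet after optimizing a
    single scale factor that best aligns the prediction with ground truth.

    pred_xyz, gt_xyz : (5, 3) camera positions, expressed in a common
    reference frame (e.g. relative to the snippet's central frame).
    """
    pred, gt = np.asarray(pred_xyz, float), np.asarray(gt_xyz, float)
    scale = np.sum(gt * pred) / np.sum(pred ** 2)   # least-squares scale fit
    return np.sqrt(np.mean(np.sum((scale * pred - gt) ** 2, axis=1)))
```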
1704.07813#31
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
32
[Footnote 3] For evaluating ORB-SLAM (full) we break down the trajectory of the full sequence into 5-frame snippets with the reference coordinate frame adjusted to the central frame of each snippet.
Method | Seq. 09 | Seq. 10
ORB-SLAM (full) | 0.014 ± 0.008 | 0.012 ± 0.011
ORB-SLAM (short) | 0.064 ± 0.141 | 0.064 ± 0.130
Mean Odom. | 0.032 ± 0.026 | 0.028 ± 0.023
Ours | 0.021 ± 0.017 | 0.020 ± 0.015
Table 3. Absolute Trajectory Error (ATE) on the KITTI odometry split averaged over all 5-frame snippets (lower is better). Our method outperforms baselines with the same input setting, but falls short of ORB-SLAM (full) that uses strictly more data.
[Figure 9 plot: Absolute Translation Error (m) versus left/right turning magnitude (m), from 0 to 0.5, for Mean Odom., ORB-SLAM (full), ORB-SLAM (short), and Ours.]
1704.07813#32
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
33
Figure 9. Absolute Trajectory Error (ATE) at different left/right turning magnitudes (coordinate difference in the side-direction between the start and ending frame of a testing sequence). Our method performs significantly better than ORB-SLAM (short) when side rotation is small, and is comparable with ORB-SLAM (full) across the entire spectrum.
Figure 9 suggests that our method is significantly better than ORB-SLAM (short) when the side-rotation is small (i.e. the car is mostly driving forward), and comparable to ORB-SLAM (full) across the entire spectrum. The large performance gap between ours and ORB-SLAM (short) suggests that our learned ego-motion could potentially be used as an alternative to the local estimation modules in monocular SLAM systems.
# 4.3. Visualizing the explainability prediction
1704.07813#33
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
34
# 4.3. Visualizing the explainability prediction We visualize example explainability masks predicted by our network in Figure 10. The first three rows suggest that the network has learned to identify dynamic objects in the scene as unexplain- able by our model, and similarly, rows 4–5 are examples of ob- jects that disappear from the frame in subsequent views. The last two rows demonstrate the potential downside of explainability- weighted loss: the depth CNN has low confidence in predicting thin structures well, and tends to mask them as unexplainable. # 5. Discussion We have presented an end-to-end learning pipeline that utilizes the task of view synthesis for supervision of single-view depth and camera pose estimation. The system is trained on unlabeled videos, and yet performs comparably with approaches that require ground-truth depth or pose for training. Despite good performance on the benchmark evaluation, our method is by no means close to solving the general problem of unsupervised learning of 3D scene structure inference. A number of major challenges are yet to be Target view Explanability mask Source view Figure 10. Sample visualizations of the explainability masks. Highlighted pixels are predicted to be unexplainable by the net- work due to motion (rows 1–3), occlusion/visibility (rows 4–5), or other factors (rows 7–8).
1704.07813#34
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
35
addressed: 1) our current framework does not explicitly estimate scene dynamics and occlusions (although they are implicitly taken into account by the explainability masks), both of which are crit- ical factors in 3D scene understanding. Direct modeling of scene dynamics through motion segmentation (e.g. [48, 40]) could be a potential solution; 2) our framework assumes the camera intrinsics are given, which forbids the use of random Internet videos with un- known camera types/calibration – we plan to address this in future work; 3) depth maps are a simplified representation of the under- lying 3D scene. It would be interesting to extend our framework to learn full 3D volumetric representations (e.g. [46]). Another interesting area for future work would be to investi- gate in more detail the representation learned by our system. In particular, the pose network likely uses some form of image cor- respondence in estimating the camera motion, whereas the depth estimation network likely recognizes common structural features of scenes and objects. It would be interesting to probe these, and investigate the extent to which our network already performs, or could be re-purposed to perform, tasks such as object detection and semantic segmentation.
1704.07813#35
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
36
Acknowledgments: We thank our colleagues, Sudheendra Vijayanarasimhan, Susanna Ricco, Cordelia Schmid, Rahul Sukthankar, and Katerina Fragkiadaki for their help. We also thank the anonymous reviewers for their valuable comments. TZ would like to thank Shubham Tulsiani for helpful discussions, and Clement Godard for sharing the evaluation code. This work is also partially funded by Intel/NSF VEC award IIS-1539099.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. 5
[2] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In Int. Conf. Computer Vision, 2015. 2
[3] J. Bergen, P. Anandan, K. Hanna, and R. Hingorani. Hierarchical model-based motion estimation. In Computer Vision–ECCV'92, pages 237–252. Springer, 1992. 4
1704.07813#36
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
37
[4] S. E. Chen and L. Williams. View interpolation for image synthesis. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques, pages 279–288. ACM, 1993. 2
[5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213–3223, 2016. 5, 7
[6] P. E. Debevec, C. J. Taylor, and J. Malik. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 11–20. ACM, 1996. 2
[7] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in Neural Information Processing Systems, 2014. 2, 5, 6, 7
1704.07813#37
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
38
[8] C. Fehn. Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV. In Electronic Imaging 2004, pages 93–104. International Society for Optics and Photonics, 2004. 3
[9] A. Fitzgibbon, Y. Wexler, and A. Zisserman. Image-based rendering using image-based priors. Int. Journal of Computer Vision, 63(2):141–151, 2005. 2
[10] J. Flynn, I. Neulander, J. Philbin, and N. Snavely. DeepStereo: Learning to predict new views from the world's imagery. In Computer Vision and Pattern Recognition, 2016. 1, 2, 3
[11] D. F. Fouhey, W. Hussain, A. Gupta, and M. Hebert. Single image 3D without a single 3D image. In Proceedings of the IEEE International Conference on Computer Vision, pages 1053–1061, 2015. 2
[12] Y. Furukawa, B. Curless, S. M. Seitz, and R. Szeliski. Towards internet-scale multi-view stereo. In Computer Vision and Pattern Recognition, pages 1434–1441. IEEE, 2010. 2
1704.07813#38
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
39
[13] M. Gadelha, S. Maji, and R. Wang. 3D shape induction from 2D views of multiple objects. arXiv preprint arXiv:1612.05872, 2016. 2
[14] R. Garg, V. K. BG, G. Carneiro, and I. Reid. Unsupervised CNN for single view depth estimation: Geometry to the rescue. In European Conf. Computer Vision, 2016. 1, 2, 3, 4, 5, 6, 7
[15] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3354–3361. IEEE, 2012. 2, 5, 7
[16] C. Godard, O. Mac Aodha, and G. J. Brostow. Unsupervised monocular depth estimation with left-right consistency. In Computer Vision and Pattern Recognition, 2017. 1, 2, 3, 4, 5, 7
1704.07813#39
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
40
[17] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised learning of spatiotemporally coherent metrics. In Proceedings of the IEEE International Conference on Computer Vision, pages 4086–4093, 2015. 2
[18] X. Han, T. Leung, Y. Jia, R. Sukthankar, and A. C. Berg. MatchNet: Unifying feature and metric learning for patch-based matching. In Computer Vision and Pattern Recognition, pages 3279–3286, 2015. 2
[19] A. Handa, M. Bloesch, V. Patraucean, S. Stent, J. McCormac, and A. Davison. gvnn: Neural network library for geometric computer vision. arXiv preprint arXiv:1607.07405, 2016. 2
[20] D. Hoiem, A. A. Efros, and M. Hebert. Automatic photo pop-up. In Proc. SIGGRAPH, 2005. 2
[21] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 5
1704.07813#40
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
41
In International Workshop on Vision Algorithms, pages 267–277. Springer, 1999. 2 [23] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015. 3 [24] D. Jayaraman and K. Grauman. Learning image representations tied to egomotion. In Int. Conf. Computer Vision, 2015. 2 [25] K. Karsch, C. Liu, and S. B. Kang. Depth transfer: Depth extraction from video using non-parametric sampling. IEEE transactions on pattern analysis and machine intelligence, 36(11):2144–2158, 2014. 7 [26] A. Kendall, M. Grimes, and R. Cipolla. PoseNet: A convolutional network for real-time 6-DOF camera relocalization. In Int. Conf. Computer Vision, pages 2938–2946, 2015. 2 [27] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry. End-to-end learning of geometry and context for deep stereo regression. arXiv preprint arXiv:1703.04309, 2017. 2
1704.07813#41
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
42
[28] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5 [29] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, pages 2539–2547. Curran Associates, Inc., 2015. 2 [30] Y. Kuznietsov, J. Stückler, and B. Leibe. Semi-supervised deep learning for monocular depth map prediction. arXiv preprint arXiv:1702.02706, 2017. 2 [31] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab. Deeper depth prediction with fully convolutional residual networks. In 3D Vision (3DV), 2016 Fourth International Conference on, pages 239–248. IEEE, 2016. 7
1704.07813#42
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
43
[32] F. Liu, C. Shen, G. Lin, and I. Reid. Learning depth from single monocular images using deep convolutional neural fields. IEEE transactions on pattern analysis and machine intelligence, 38(10):2024–2039, 2016. 7 [33] M. Liu, M. Salzmann, and X. He. Discrete-continuous depth estimation from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 716–723, 2014. 7 [34] M. M. Loper and M. J. Black. OpenDR: An approximate differentiable renderer. In European Conf. Computer Vision, pages 154–169. Springer, 2014. 2 [35] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4040–4048, 2016. 4
1704.07813#43
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
44
[36] I. Misra, C. L. Zitnick, and M. Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In European Conference on Computer Vision, pages 527–544. Springer, 2016. 2 [37] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos. ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5), 2015. 6, 7 [38] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison. DTAM: Dense tracking and mapping in real-time. In Int. Conf. Computer Vision, pages 2320–2327. IEEE, 2011. 2 [39] D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan. Learning features by watching objects move. In CVPR, 2017. 2 [40] R. Ranftl, V. Vineet, Q. Chen, and V. Koltun. Dense monocular depth estimation in complex dynamic scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4058–4066, 2016. 8
1704.07813#44
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
45
[41] D. J. Rezende, S. A. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess. Unsupervised learning of 3d structure from images. In Advances In Neural Information Processing Systems, pages 4997–5005, 2016. 2 [42] A. Saxena, M. Sun, and A. Y. Ng. Make3D: Learning 3D scene structure from a single still image. Pattern Analysis and Machine Intelligence, 31(5):824–840, May 2009. 2, 5, 7 [43] S. M. Seitz and C. R. Dyer. View morphing. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 21–30. ACM, 1996. 2 [44] R. Szeliski. Prediction error as a quality metric for motion and stereo. In Int. Conf. Computer Vision, volume 2, pages 781–788. IEEE, 1999. 1 [45] M. Tatarchenko, A. Dosovitskiy, and T. Brox. Multi-view 3d models from single images with a convolutional network. In European Conference on Computer Vision, pages 322–337. Springer, 2016. 2
1704.07813#45
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
46
[46] S. Tulsiani, T. Zhou, A. A. Efros, and J. Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In Computer Vision and Pattern Recognition, 2017. 2, 8 [47] B. Ummenhofer, H. Zhou, J. Uhrig, N. Mayer, E. Ilg, A. Dosovitskiy, and T. Brox. DeMoN: Depth and motion network for learning monocular stereo. arXiv preprint arXiv:1612.02401, 2016. 4 [48] S. Vijayanarasimhan, S. Ricco, C. Schmid, R. Sukthankar, and K. Fragkiadaki. SfM-Net: Learning of structure and motion from video. arXiv preprint, 2017. 2, 4, 8 [49] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 2794–2802, 2015. 2 [50] C. Wu. VisualSFM: A visual structure from motion system. http://ccwu.me/vsfm, 2011. 2
1704.07813#46
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
47
[50] C. Wu. VisualSFM: A visual structure from motion system. http://ccwu.me/vsfm, 2011. 2 [51] J. Xie, R. B. Girshick, and A. Farhadi. Deep3D: Fully automatic 2D-to-3D video conversion with deep convolutional neural networks. In European Conf. Computer Vision, 2016. 2 [52] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In Advances in Neural Information Processing Systems, pages 1696–1704, 2016. 2
1704.07813#47
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07813
48
[53] J. Zbontar and Y. LeCun. Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research, 17(1-32):2, 2016. 2 [54] T. Zhou, S. Tulsiani, W. Sun, J. Malik, and A. A. Efros. View synthesis by appearance flow. In European Conference on Computer Vision, pages 286–301. Springer, 2016. 2, 3 [55] C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski. High-quality video view interpolation using a layered representation. In ACM Transactions on Graphics (TOG), volume 23, pages 600–608. ACM, 2004. 2
1704.07813#48
Unsupervised Learning of Depth and Ego-Motion from Video
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably with established SLAM systems under comparable input settings.
http://arxiv.org/pdf/1704.07813
Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe
cs.CV
Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/
null
cs.CV
20170425
20170801
[ { "id": "1502.03167" }, { "id": "1702.02706" }, { "id": "1703.04309" }, { "id": "1607.07405" }, { "id": "1603.04467" }, { "id": "1612.05872" }, { "id": "1612.02401" } ]
1704.07138
1
# Qun Liu ADAPT Centre Dublin City University [email protected] # Abstract We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $\mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\}$, by maximizing $p(\mathbf{y}|\mathbf{x}) = \prod_{t} p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\})$. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
1704.07138#1
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
2
time. Humans can provide corrections after viewing a system's initial output, or separate classification models may be able to predict parts of the output with high confidence. When the domain of the input is known, a domain terminology may be employed to ensure specific phrases are present in a system's predictions. Our goal in this work is to find a way to force the output of a model to contain such lexical constraints, while still taking advantage of the distribution learned from training data. For Machine Translation (MT) usecases in particular, final translations are often produced by combining automatically translated output with user inputs. Examples include Post-Editing (PE) (Koehn, 2009; Specia, 2011) and Interactive-Predictive MT (Foster, 2002; Barrachina et al., 2009; Green, 2014). These interactive scenarios can be unified by considering user inputs to be lexical constraints which guide the search for the optimal output sequence.
1704.07138#2
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
3
In this paper, we formalize the notion of lexical constraints, and propose a decoding algorithm which allows the specification of subsequences that are required to be present in a model's output. Individual constraints may be single tokens or multi-word phrases, and any number of constraints may be specified simultaneously. # Introduction The output of many natural language processing models is a sequence of text. Examples include automatic summarization (Rush et al., 2015), machine translation (Koehn, 2010; Bahdanau et al., 2014), caption generation (Xu et al., 2015), and dialog generation (Serban et al., 2016), among others. Although we focus upon interactive applications for MT in our experiments, lexically constrained decoding is relevant to any scenario where a model is asked to generate a sequence ˆy = {y0 . . . yT } given both an input x, and a set {c0 . . . cn}, where each ci is a sub-sequence {ci0 . . . cij } that must appear somewhere in ˆy. This makes our work applicable to a wide range of text generation scenarios, including image description, dialog generation, abstractive summarization, and question answering.
1704.07138#3
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
4
In some real-world scenarios, additional information that could inform the search for the optimal output sequence may be available at inference The rest of this paper is organized as follows: Section 2 gives the necessary background for our [Figure 1 body: the decoding grid for the output "<S> Ihre Rechte müssen vor ihrer Abreise geschützt werden </S>", with the two lexical constraints marked in boxes. Input: Rights protection should begin before their departure .] Figure 1: A visualization of the decoding process for an actual example from our English-German MT experiments. The output token at each timestep appears at the top of the figure, with lexical constraints enclosed in boxes. Generation is shown in blue, Starting new constraints in green, and Continuing constraints in red. The function used to create the hypothesis at each timestep is written at the bottom. Each box in the grid represents a beam; a colored strip inside a beam represents an individual hypothesis in the beam's k-best stack. Hypotheses with circles inside them are closed, all other hypotheses are open. (Best viewed in colour).
1704.07138#4
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
5
discussion of GBS, Section 3 discusses the lexically constrained decoding algorithm in detail, Section 4 presents our experiments, and Section 5 gives an overview of closely related work. # 2 Background: Beam Search for Sequence Generation Under a model parameterized by θ, let the best output sequence ˆy given input x be Eq. 1: $\mathbf{\hat{y}} = \operatorname{argmax}_{\mathbf{y} \in \{\mathbf{y}^{[T]}\}} p_{\theta}(\mathbf{y}|\mathbf{x})$, (1) where we use $\{\mathbf{y}^{[T]}\}$ to denote the set of all sequences of length T. Because the number of possible sequences for such a model is $|v|^{T}$, where $|v|$ is the number of output symbols, the search for ˆy can be made more tractable by factorizing $p_{\theta}(\mathbf{y}|\mathbf{x})$ into Eq. 2: $p_{\theta}(\mathbf{y}|\mathbf{x}) = \prod_{t=0}^{T} p_{\theta}(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\})$. (2) and the already-generated symbols $\{y_{0} \ldots y_{t-1}\}$. However, greedy selection of the most probable output at each timestep, i.e. $\hat{y}_{t} = \operatorname{argmax}_{y_{i} \in \{v\}} p(y_{i}|\mathbf{x}; \{y_{0} \ldots y_{t-1}\})$, (3) (a greedy-decoding sketch in code follows below)
1704.07138#5
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
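To make the greedy selection of Eq. 3 above concrete, here is a minimal, self-contained sketch; the toy vocabulary and the `next_token_probs` stand-in for $p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\})$ are illustrative assumptions, not part of the paper.

```python
import numpy as np

VOCAB = ["<s>", "</s>", "a", "b", "c"]
EOS = VOCAB.index("</s>")

def next_token_probs(prefix):
    """Toy stand-in for p(y_t | x; y_0..y_{t-1}): any function returning
    a normalized distribution over the vocabulary would do."""
    rng = np.random.default_rng(len(prefix))   # deterministic per prefix length
    logits = rng.normal(size=len(VOCAB))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def greedy_decode(max_len=10):
    """Eq. 3: take the argmax of p(y | x; prefix) at every timestep."""
    prefix = [VOCAB.index("<s>")]
    for _ in range(max_len):
        probs = next_token_probs(prefix)
        y_t = int(np.argmax(probs))            # locally optimal, possibly globally sub-optimal
        prefix.append(y_t)
        if y_t == EOS:
            break
    return [VOCAB[i] for i in prefix]

print(greedy_decode())
```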
1704.07138
6
$p_{\theta}(\mathbf{y}|\mathbf{x}) = \prod_{t=0}^{T} p_{\theta}(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\})$ (2) The standard approach is thus to generate the output sequence from beginning to end, conditioning the output at each timestep upon the input x and the already-generated symbols. However, greedy selection of the most probable output at each timestep (Eq. 3) risks making locally optimal decisions which are actually globally sub-optimal. On the other hand, an exhaustive exploration of the output space would require scoring $|v|^{T}$ sequences, which is intractable for most real-world models. Thus, a search or decoding algorithm is often used as a compromise between these two extremes. A common solution is to use a heuristic search to attempt to find the best output efficiently (Pearl, 1984; Koehn, 2010; Rush et al., 2013). The key idea is to discard bad options early, while trying to avoid discarding candidates that may be locally risky, but could eventually result in the best overall output. Beam search (Och and Ney, 2004) is probably the most popular search algorithm for decoding sequences (a minimal beam-search sketch in code follows below). Beam search is simple to implement, and is flexible in the sense that the semantics of the
1704.07138#6
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
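A minimal sketch of the time-synchronous beam search discussed above (structure (C) in Figure 2); the uniform toy distribution and all function names are hypothetical assumptions, and this is not the authors' implementation.

```python
import math

def beam_search(next_token_logprobs, vocab_size, eos_id, k=4, max_len=20):
    """Plain time-synchronous beam search.

    next_token_logprobs(prefix) -> list of log p(y | x; prefix), length vocab_size.
    """
    beam = [(0.0, [0])]                      # (cumulative log-prob, token ids); 0 = <s>
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, prefix in beam:
            logprobs = next_token_logprobs(prefix)
            for y in range(vocab_size):
                candidates.append((score + logprobs[y], prefix + [y]))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = []
        for score, prefix in candidates:
            if prefix[-1] == eos_id:
                finished.append((score, prefix))   # hypothesis is complete (generated EOS)
            elif len(beam) < k:
                beam.append((score, prefix))       # k best open hypotheses stay on the beam
        if not beam:
            break
    pool = finished if finished else beam
    return max(pool, key=lambda c: c[0])

# Toy usage: a uniform next-token distribution over a 5-symbol vocabulary.
uniform = lambda prefix: [math.log(1.0 / 5)] * 5
print(beam_search(uniform, vocab_size=5, eos_id=1, k=3, max_len=5))
```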
1704.07138
7
Figure 2: Different structures for beam search. Boxes represent beams which hold k-best lists of hypotheses. (A) Chart Parsing using SCFG rules to cover spans in the input. (B) Source coverage as used in PB-SMT. (C) Sequence timesteps (as used in Neural Sequence Models), GBS is an extension of (C). In (A) and (B), hypotheses are finished once they reach the final beam. In (C), a hypothesis is only complete if it has generated an end-of-sequence (EOS) symbol.
1704.07138#7
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
8
graph of beams can be adapted to take advantage of additional structure that may be available for specific tasks. For example, in Phrase-Based Statistical MT (PB-SMT) (Koehn, 2010), beams are organized by the number of source words that are covered by the hypotheses in the beam – a hypothesis is "finished" when it has covered all source words. In chart-based decoding algorithms such as CYK, beams are also tied to coverage of the input, but are organized as cells in a chart, which facilitates search for the optimal latent structure of the output (Chiang, 2007). Figure 2 visualizes three common ways to structure search. (A) and (B) depend upon explicit structural information between the input and output, (C) only assumes that the output is a sequence where later symbols depend upon earlier ones. Note also that (C) corresponds exactly to the bottom rows of Figures 1 and 3.
1704.07138#8
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
9
With the recent success of neural models for text generation, beam search has become the de-facto choice for decoding optimal output sequences (Sutskever et al., 2014). However, with neural sequence models, we cannot organize beams by their explicit coverage of the input. A simpler alternative is to organize beams by output timesteps from t0 · · · tN , where N is a hyperparameter that can be set heuristically, for example by multiplying a factor with the length of the input to make an educated guess about the maximum length of the output (Sutskever et al., 2014). Output sequences are generally considered complete once a special "end-of-sentence" (EOS) token has been generated. Beam size in these models is also typically kept small, and recent work has shown Figure 3: Visualizing the lexically constrained decoder's complete search graph. Each rectangle represents a beam containing k hypotheses. Dashed (diagonal) edges indicate starting or continuing constraints. Horizontal edges represent generating from the model's distribution. The horizontal axis covers the timesteps in the output sequence, and the vertical axis covers the constraint tokens (one row for each token in each constraint). Beams on the top level of the grid contain hypotheses which cover all constraints.
1704.07138#9
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
10
that the performance of some architectures can actually degrade with larger beam size (Tu et al., 2016). # 3 Grid Beam Search Our goal is to organize decoding in such a way that we can constrain the search space to outputs which contain one or more pre-specified sub-sequences. We thus wish to use a model's distribution both to "place" lexical constraints correctly, and to generate the parts of the output which are not covered by the constraints. Algorithm 1 presents the pseudo-code for lexically constrained decoding, see Figures 1 and 3 for visualizations of the search process. Beams in the grid are indexed by t and c. The t variable tracks the timestep of the search, while the c variable indicates how many constraint tokens are covered by the hypotheses in the current beam. Note that each step of c covers a single constraint token. In other words, constraints is an array of sequences, where individual tokens can be indexed as constraints_{ij}, i.e. token_j in constraint_i. The numC parameter in Algorithm 1 represents the total number of tokens in all constraints (a sketch of this bookkeeping in code follows below). The hypotheses in a beam can be separated into two types (see lines 9-11 and 15-19 of Algorithm 1): 1. open hypotheses can either generate from the model's distribution, or start available constraints,
1704.07138#10
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
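A possible way to represent the grid and the open/closed hypotheses described above; the `Hypothesis` dataclass and `init_grid` helper are illustrative names chosen for this sketch, not taken from the released code.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Hypothesis:
    tokens: List[int]                 # output prefix y_0 .. y_t
    score: float                      # cumulative log-probability
    coverage: List[List[bool]]        # coverage[i][j] = has constraint_i token_j been used?
    unfinished: Optional[Tuple[int, int]] = None   # (constraint idx, next token idx) if closed

    def is_open(self) -> bool:
        # open = not in the middle of a multi-token constraint
        return self.unfinished is None

def num_constraint_tokens(constraints: List[List[int]]) -> int:
    return sum(len(c) for c in constraints)

def init_grid(max_len: int, num_c: int):
    # Grid[t][c] is a beam (list of hypotheses) covering c constraint tokens at timestep t
    return [[[] for _ in range(num_c + 1)] for _ in range(max_len)]

# toy usage: one single-token constraint and one two-token phrase
constraints = [[7], [3, 9]]
grid = init_grid(max_len=10, num_c=num_constraint_tokens(constraints))
grid[0][0] = [Hypothesis(tokens=[0], score=0.0,
                         coverage=[[False] * len(c) for c in constraints])]
print(len(grid), len(grid[0]), grid[0][0][0].is_open())
```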
1704.07138
12
Algorithm 1 Pseudo-code for Grid Beam Search, note that t and c indices are 0-based
1: procedure CONSTRAINEDSEARCH(model, input, constraints, maxLen, numC, k)
2:     startHyp ← model.getStartHyp(input, constraints)
3:     Grid ← initGrid(maxLen, numC, k)                          ▷ initialize beams in grid
4:     Grid[0][0] = startHyp
5:     for t = 1, t++, t < maxLen do
6:         for c = max(0, (numC + t) − maxLen), c++, c ≤ min(t, numC) do
7:             n, s, g = ∅
8:             for each hyp ∈ Grid[t − 1][c] do
9:                 if hyp.isOpen() then
10:                    g ← g ∪ model.generate(hyp, input, constraints)
11:                end if
12:            end for
13:            if c > 0 then
14:                for each hyp ∈ Grid[t − 1][c − 1] do
15:                    if hyp.isOpen() then
16:                        n ← n ∪ model.start(hyp, input, constraints)
17:                    else
18:                        s ← s ∪ model.continue(hyp, input, constraints)
19:                    end if
20:                end for
21:            end if
22:            Grid[t][c] = k-argmax_{h ∈ n ∪ s ∪ g} model.score(h)
23:        end for
24:    end for
25:    topLevelHyps = Grid[:][numC]
26:    finishedHyps = hasEOS(topLevelHyps)
27:    bestHyp = argmax_{h ∈ finishedHyps} model.score(h)
1704.07138#12
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
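The reconstructed pseudo-code above maps fairly directly onto Python. The sketch below assumes a `model` object exposing the generate/start/continue/score interface described in the text (`continue_` is used because `continue` is a Python keyword), and assumes `is_open` and `has_eos` methods on the hypothesis objects; it is an illustration, not the authors' implementation.

```python
def grid_beam_search(model, inp, constraints, max_len, k):
    """Sketch of Algorithm 1. model.generate/start/continue_ each return a list of
    new hypotheses; model.score returns a float."""
    num_c = sum(len(c) for c in constraints)
    grid = [[[] for _ in range(num_c + 1)] for _ in range(max_len)]
    grid[0][0] = [model.get_start_hyp(inp, constraints)]

    for t in range(1, max_len):
        for c in range(max(0, (num_c + t) - max_len), min(t, num_c) + 1):
            n, s, g = [], [], []
            for hyp in grid[t - 1][c]:
                if hyp.is_open():
                    g.extend(model.generate(hyp, inp, constraints))       # new open hyps
            if c > 0:
                for hyp in grid[t - 1][c - 1]:
                    if hyp.is_open():
                        n.extend(model.start(hyp, inp, constraints))      # start a constraint
                    else:
                        s.extend(model.continue_(hyp, inp, constraints))  # continue one
            # k-best scoring hypotheses stay on the beam
            grid[t][c] = sorted(n + s + g, key=model.score, reverse=True)[:k]

    # hypotheses in top-level beams cover all constraints; pick the best finished one
    top_level = [hyp for t in range(max_len) for hyp in grid[t][num_c]]
    finished = [hyp for hyp in top_level if hyp.has_eos()]
    return max(finished, key=model.score) if finished else None
```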
1704.07138
13
Algorithm 1 (continued), with the line comments that belong to the steps above:
10:                    g ← g ∪ model.generate(hyp, input, constraints)      ▷ generate new open hyps
14:                for each hyp ∈ Grid[t − 1][c − 1] do
16:                        n ← n ∪ model.start(hyp, input, constraints)     ▷ start new constrained hyps
18:                        s ← s ∪ model.continue(hyp, input, constraints)  ▷ continue unfinished
22:            Grid[t][c] = k-argmax_{h ∈ n ∪ s ∪ g} model.score(h)         ▷ k-best scoring hypotheses stay on the beam
25:    topLevelHyps = Grid[:][numC]                                         ▷ get hyps in top-level beams
26:    finishedHyps = hasEOS(topLevelHyps)                                  ▷ finished hyps have generated the EOS token
27:    bestHyp = argmax_{h ∈ finishedHyps} model.score(h)
1704.07138#13
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
14
27:    bestHyp = argmax_{h ∈ finishedHyps} model.score(h)
28:    return bestHyp
29: end procedure
2. closed hypotheses can only generate the next token in a currently unfinished constraint. At each step of the search the beam at Grid[t][c] is filled with candidates which may be created in three ways: 1. the open hypotheses in the beam to the left (Grid[t − 1][c]) may generate continuations from the model's distribution $p_{\theta}(y_{i}|\mathbf{x}, \{y_{0} \ldots y_{i-1}\})$, 2. the open hypotheses in the beam to the left and below (Grid[t−1][c−1]) may start new constraints, 3. the closed hypotheses in the beam to the left and below (Grid[t − 1][c − 1]) may continue constraints. Therefore, the model in Algorithm 1 implements an interface with three functions: generate, start, and continue, which build new hypotheses in each of the three ways. Note that the scoring function of the model does not need to be aware of the existence of constraints, but it may be, for example via a feature which indicates if a hypothesis is part of a constraint or not.
1704.07138#14
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
15
The beams at the top level of the grid (beams where c = numConstraints) contain hypotheses which cover all of the constraints. Once a hypothesis on the top level generates the EOS token, it can be added to the set of finished hypotheses. The highest scoring hypothesis in the set of finished hypotheses is the best sequence which covers all constraints.1 Therefore, the model in Algorithm 1 implements an interface with three functions: generate, start, and continue. 1 Our implementation of GBS is available at https://github.com/chrishokamp/constrained_decoding # 3.1 Multi-token Constraints By distinguishing between open and closed hypotheses, we can allow for arbitrary multi-token phrases in the search. Thus, the set of constraints for a particular output may include both individual tokens and phrases. Each hypothesis maintains a coverage vector to ensure that constraints cannot be repeated in a search path – hypotheses which have already covered constraint_i can only generate, or start constraints that have not yet been covered (a sketch of this coverage bookkeeping in code follows below).
1704.07138#15
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
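One possible implementation of the coverage bookkeeping mentioned above: start is only offered for constraints that are not yet covered, and continue advances a pointer into the currently unfinished constraint. The helper names and the per-constraint coverage flags are assumptions made for this sketch.

```python
from typing import List, Optional, Tuple

def allowed_starts(covered: List[bool],
                   unfinished: Optional[Tuple[int, int]],
                   constraints: List[List[str]]) -> List[Tuple[int, str]]:
    """Constraints an *open* hypothesis may start: each uncovered constraint's first token."""
    if unfinished is not None:          # closed hypotheses may not start anything new
        return []
    return [(i, constraints[i][0]) for i, done in enumerate(covered) if not done]

def continue_token(unfinished: Tuple[int, int],
                   constraints: List[List[str]]) -> Tuple[str, Optional[Tuple[int, int]]]:
    """Next token of the unfinished constraint, plus the updated pointer
    (None once the constraint's last token has been produced)."""
    i, j = unfinished
    token = constraints[i][j]
    nxt = None if j + 1 == len(constraints[i]) else (i, j + 1)
    return token, nxt

# toy usage: one single-token constraint and one two-token phrase
constraints = [["Rechte"], ["geschützt", "werden"]]
print(allowed_starts([False, False], None, constraints))   # both constraints may be started
print(continue_token((1, 1), constraints))                 # finishes the second constraint
```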
1704.07138
16
Note also that discontinuous lexical constraints, such as phrasal verbs in English or German, are easy to incorporate into GBS, by adding filters to the search, which require that one or more conditions must be met before a constraint can be used. For example, adding the phrasal verb "ask (someone) out" as a constraint would mean using "ask" as constraint_0 and "out" as constraint_1, with two filters: one requiring that constraint_1 cannot be used before constraint_0, and another requiring that there must be at least one generated token between the constraints (a sketch of such filters in code follows below). # 3.2 Subword Units
1704.07138#16
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
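The filters for discontinuous constraints can be sketched as simple predicates that are checked before a constraint may be started; the hypothesis representation below (a dict carrying `constraint_positions` and `length`) is hypothetical and chosen only for illustration.

```python
def ordering_filter(first_idx):
    """The filtered constraint may only start once constraint `first_idx` has been placed."""
    return lambda hyp: first_idx in hyp["constraint_positions"]

def gap_filter(first_idx, min_gap=1):
    """Require at least `min_gap` generated tokens between the two constraints."""
    def ok(hyp):
        pos = hyp["constraint_positions"].get(first_idx)
        return pos is not None and hyp["length"] - pos > min_gap
    return ok

# constraint 0 = "ask", constraint 1 = "out": "out" may only start after "ask",
# with at least one token generated in between.
filters_for_constraint = {1: [ordering_filter(0), gap_filter(0)]}

def may_start(constraint_idx, hyp):
    return all(f(hyp) for f in filters_for_constraint.get(constraint_idx, []))

hyp = {"constraint_positions": {0: 3}, "length": 6}   # "ask" was placed at position 3
print(may_start(1, hyp))                              # True: both conditions are met
```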
1704.07138
17
# 3.2 Subword Units Both the computation of the score for a hypothesis, and the granularity of the tokens (character, subword, word, etc...) are left to the underlying model. Because our decoder can handle arbitrary constraints, there is a risk that constraints will contain tokens that were never observed in the training data, and thus are unknown by the model. Especially in domain adaptation scenarios, some user-specified constraints are very likely to contain unseen tokens. Subword representations provide an elegant way to circumvent this problem, by breaking unknown or rare tokens into character n-grams which are part of the model's vocabulary (Sennrich et al., 2016; Wu et al., 2016). In the experiments in Section 4, we use this technique to ensure that no input tokens are unknown, even if a constraint contains words which never appeared in the training data.2 # 3.3 Efficiency Because the number of beams is multiplied by the number of constraints, the runtime complexity of a naive implementation of GBS is O(ktc). Standard time-based beam search is O(kt); therefore, 2 If a character that was not observed in training data is observed at prediction time, it will be unknown. However, we did not observe this in any of our experiments.
1704.07138#17
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
18
2 If a character that was not observed in training data is observed at prediction time, it will be unknown. However, we did not observe this in any of our experiments. some consideration must be given to the efficiency of this algorithm. Note that the beams in each column c of Figure 3 are independent, meaning that GBS can be parallelized to allow all beams at each timestep to be filled simultaneously (a concurrency sketch in code follows below). Also, we find that the most time is spent computing the states for the hypothesis candidates, so by keeping the beam size small, we can make GBS significantly faster. # 3.4 Models
1704.07138#18
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
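Since the beams in one timestep's column are mutually independent, they can be filled concurrently; below is a hypothetical sketch using a thread pool, with `expand_fn` standing in for the candidate-building step of Algorithm 1 and hypotheses represented as (score, tokens) tuples.

```python
from concurrent.futures import ThreadPoolExecutor

def fill_beam(t, c, grid, expand_fn, k):
    """Build beam (t, c) from beams (t-1, c) and (t-1, c-1) only."""
    below = grid[t - 1][c - 1] if c > 0 else []
    candidates = expand_fn(grid[t - 1][c], below)
    return sorted(candidates, key=lambda h: h[0], reverse=True)[:k]

def fill_timestep(t, grid, expand_fn, k, num_c, max_len):
    """Fill every beam of column range for timestep t in parallel."""
    cols = range(max(0, (num_c + t) - max_len), min(t, num_c) + 1)
    with ThreadPoolExecutor() as pool:
        futures = {c: pool.submit(fill_beam, t, c, grid, expand_fn, k) for c in cols}
    for c, fut in futures.items():
        grid[t][c] = fut.result()

# toy demo: each expansion appends token 0 and lowers the score by 1.0
expand = lambda same_col, prev_col: [(s - 1.0, toks + [0]) for s, toks in same_col + prev_col]
grid = [[[] for _ in range(3)] for _ in range(5)]
grid[0][0] = [(0.0, [0])]
fill_timestep(1, grid, expand, k=2, num_c=2, max_len=5)
print(grid[1][0], grid[1][1])
```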
1704.07138
19
# 3.4 Models The models used for our experiments are state-of-the-art Neural Machine Translation (NMT) systems using our own implementation of NMT with attention over the source sequence (Bahdanau et al., 2014). We used Blocks and Fuel to implement our NMT models (van Merriënboer et al., 2015). To conduct the experiments in the following section, we trained baseline translation models for English–German (EN-DE), English–French (EN-FR), and English–Portuguese (EN-PT). We created a shared subword representation for each language pair by extracting a vocabulary of 80000 symbols from the concatenated source and target data. See the Appendix for more details on our training data and hyperparameter configuration for each language pair. The beamSize parameter is set to 10 for all experiments.
1704.07138#19
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
20
Because our experiments use NMT models, we can now be more explicit about the implementations of the generate, start, and continue functions for this GBS instantiation. For an NMT model at timestep t, generate($hyp_{t-1}$) first computes a vector of output probabilities $o_t = \text{softmax}(g(y_{t-1}, s_i, c_i))$³ using the state information available from $hyp_{t-1}$, and returns the best k continuations, i.e. Eq. 4:

$$ \mathbf{g}_t = \text{k-argmax}_{i}\; o_{t_i} \quad (4) $$

The start and continue functions simply index into the softmax output of the model, selecting specific tokens instead of doing a k-argmax over the entire target language vocabulary. For example, to start constraint $c_i$, we find the score of token $c_{i_0}$, i.e. $o_{t_{c_{i_0}}}$.

# 4 Experiments

# 4.1 Pick-Revise for Interactive Post Editing

Pick-Revise is an interaction cycle for MT Post-Editing proposed by Cheng et al. (2016). Starting

³We use the notation for the g function from Bahdanau et al. (2014).
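To illustrate the three scoring functions just described for the NMT instantiation (this is our sketch, not the authors' code), the snippet below assumes a hypothetical `model_step` function that runs one decoder step and returns the softmax vector o_t together with the new decoder state; hypotheses are simplified to (token, score, state) tuples.

```python
import numpy as np


def generate(hyp, model_step, k):
    """Unconstrained continuations: Eq. 4, the k best entries of o_t."""
    o_t, new_state = model_step(hyp)          # o_t: softmax over the target vocabulary
    best = np.argsort(o_t)[::-1][:k]          # indices of the k highest-scoring tokens
    return [(int(tok), float(o_t[tok]), new_state) for tok in best]


def start(hyp, model_step, constraint):
    """Begin constraint c_i: score its first token instead of taking a k-argmax."""
    o_t, new_state = model_step(hyp)
    tok = constraint[0]
    return (tok, float(o_t[tok]), new_state)


def continue_(hyp, model_step, constraint, j):
    """Extend an open constraint: score token j of the constraint."""
    o_t, new_state = model_step(hyp)
    tok = constraint[j]
    return (tok, float(o_t[tok]), new_state)
```

The trailing underscore in `continue_` is only needed because `continue` is a Python keyword; the logic is otherwise identical to `start` with a different index into the constraint.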
1704.07138#20
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
21
| | Iteration 0 | Iteration 1 | Iteration 2 | Iteration 3 |
|---|---|---|---|---|
| **Strict Constraints** | | | | |
| EN-DE | 18.44 | 27.64 (+9.20) | 36.66 (+9.01) | 43.92 (+7.26) |
| EN-FR | 28.07 | 36.71 (+8.64) | 44.84 (+8.13) | 45.48 (+0.63) |
| EN-PT* | 15.41 | 23.54 (+8.25) | 31.14 (+7.60) | 35.89 (+4.75) |
| **Relaxed Constraints** | | | | |
| EN-DE | 18.44 | 26.43 (+7.98) | 34.48 (+8.04) | 41.82 (+7.34) |
| EN-FR | 28.07 | 33.8 (+5.72) | 40.33 (+6.53) | 47.0 (+6.67) |
| EN-PT* | 15.41 | 23.22 (+7.80) | 33.82 (+10.6) | 40.75 (+6.93) |
1704.07138#21
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
22
Table 1: Results for four simulated editing cycles using WMT test data. EN-DE uses newstest2013, EN-FR uses newstest2014, and EN-PT uses the Autodesk corpus discussed in Section 4.2. Improvement in BLEU score over the previous cycle is shown in parentheses. * indicates use of our test corpus created from Autodesk post-editing data.

with the original translation hypothesis, a (simulated) user first picks a part of the hypothesis which is incorrect, and then provides the correct translation for that portion of the output. The user-provided correction is then used as a constraint for the next decoding cycle. The Pick-Revise process can be repeated as many times as necessary, with a new constraint being added at each cycle.

data that contains the same placeholders which occur in the test data (Crego et al., 2016). The MT system also loses any possibility to model the tokens in the terminology, since they are represented by abstract tokens such as “(TERM_1)”. An attractive alternative is to simply provide term mappings as constraints, allowing any existing system to adapt to the terminology used in a new test domain.
1704.07138#22
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
23
We modify the experiments of Cheng et al. (2016) slightly, and assume that the user only provides sequences of up to three words which are missing from the hypothesis.⁴ To simulate user interaction, at each iteration we chose a phrase of up to three tokens from the reference translation which does not appear in the current MT hypotheses. In the strict setting, the complete phrase must be missing from the hypothesis. In the relaxed setting, only the first word must be missing. Table 1 shows results for a simulated editing session with four cycles. When a three-token phrase cannot be found, we back off to two-token phrases, then to single tokens as constraints. If a hypothesis already matches the reference, no constraints are added. By specifying a new constraint of up to three words at each cycle, an increase of over 20 BLEU points is achieved in all language pairs.
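The constraint-selection step of this simulated user can be sketched as follows (a hedged illustration; function and variable names are ours, and the substring test is a simplification that ignores token-boundary edge cases):

```python
def pick_constraint(reference, hypothesis, max_len=3, strict=True):
    """Return one phrase of up to `max_len` reference tokens missing from `hypothesis`.

    Backs off from three-token phrases to two-token phrases to single tokens,
    mirroring the simulated user described above; returns None when the
    hypothesis already contains every candidate phrase.
    """
    hyp_text = " ".join(hypothesis)
    hyp_tokens = set(hypothesis)
    for n in range(max_len, 0, -1):                 # backoff: 3 -> 2 -> 1 tokens
        for i in range(len(reference) - n + 1):
            phrase = reference[i:i + n]
            if strict:
                # strict setting: the complete phrase must be absent from the hypothesis
                missing = " ".join(phrase) not in hyp_text
            else:
                # relaxed setting: only the first word must be absent
                missing = phrase[0] not in hyp_tokens
            if missing:
                return phrase
    return None
```

Searching longest phrases first matches the backoff order in the text: a three-token correction is preferred, and single tokens are used only as a last resort.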
1704.07138#23
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
24
For the target domain data, we use the Autodesk Post-Editing corpus (Zhechev, 2012), which is a dataset collected from actual MT post-editing sessions. The corpus is focused upon software localization, a domain which is likely to be very different from the WMT data used to train our general domain models. We divide the corpus into approximately 100,000 training sentences, and 1000 test segments, and automatically generate a terminology by computing the Pointwise Mutual Information (PMI) (Church and Hanks, 1990) between source and target n-grams in the training set. We extract all n-grams from length 2-5 as terminology candidates.

$$ \text{pmi}(x; y) = \log \frac{p(x, y)}{p(x)p(y)} \quad (5) $$

$$ \text{npmi}(x; y) = \frac{\text{pmi}(x; y)}{h(x, y)} \quad (6) $$

# 4.2 Domain Adaptation via Terminology

The requirement for use of domain-specific terminologies is common in real-world applications of MT (Crego et al., 2016). Existing approaches incorporate placeholder tokens into NMT systems, which requires modifying the pre- and post-processing of the data, and training the system with
1704.07138#24
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
25
⁴NMT models do not use explicit alignment between source and target, so we cannot use alignment information to map target phrases to source phrases.

Equations 5 and 6 show how we compute the normalized PMI for a terminology candidate pair. The PMI score is normalized to the range [−1, +1] by dividing by the entropy h of the joint probability p(x, y). We then filter the candidates to only include pairs whose PMI is ≥ 0.9, and where both the source and target phrases occur at least five times in the corpus. When source phrases that match the terminology are observed in the test data, the corresponding target phrase is added to the constraints for that segment. Results are shown in Table 2. As a sanity check that improvements in BLEU are not merely due to the presence of the terms somewhere in the output, i.e. that the placement of the terms by GBS is reasonable, we also evaluate the results of randomly inserting terms into the baseline output, and of prepending terms to the baseline output.
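A minimal sketch of the terminology pipeline described above, under the assumptions that n-grams are represented as token tuples, that probabilities are simple relative frequencies over the training corpus, and that h(x, y) is taken as −log p(x, y); all helper names are ours, not the authors':

```python
import math


def npmi(count_xy, count_x, count_y, total):
    """Normalized PMI (Eqs. 5 and 6): pmi(x; y) divided by -log p(x, y)."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    pmi = math.log(p_xy / (p_x * p_y))    # Eq. 5
    h_xy = -math.log(p_xy)                # h(x, y), normalizes pmi to [-1, +1]
    return pmi / h_xy                     # Eq. 6


def build_terminology(pair_counts, src_counts, tgt_counts, total):
    """Keep candidate pairs with npmi >= 0.9 whose phrases each occur >= 5 times."""
    terminology = {}
    for (src, tgt), c_xy in pair_counts.items():   # src, tgt: n-grams as token tuples
        if src_counts[src] < 5 or tgt_counts[tgt] < 5:
            continue
        if npmi(c_xy, src_counts[src], tgt_counts[tgt], total) >= 0.9:
            terminology[src] = tgt
    return terminology


def constraints_for_segment(source_tokens, terminology, max_n=5):
    """Collect the target side of every terminology entry whose source side occurs."""
    constraints = []
    for n in range(2, max_n + 1):
        for i in range(len(source_tokens) - n + 1):
            phrase = tuple(source_tokens[i:i + n])
            if phrase in terminology:
                constraints.append(terminology[phrase])
    return constraints
```

At test time, `constraints_for_segment` would be run on each source segment and its output passed to GBS as the set of target-side constraints for that segment.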
1704.07138#25
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
26
This simple method of domain adaptation leads to a significant improvement in the BLEU score without any human intervention. Surprisingly, even an automatically created terminology combined with GBS yields performance improvements of approximately +2 BLEU points for En-De and En-Fr, and a gain of almost 14 points for En-Pt. The large improvement for En-Pt is probably due to the training data for this system being very different from the IT domain (see Appendix). Given the performance improvements from our automatically extracted terminology, manually created domain terminologies with good coverage of the test domain are likely to lead to even greater gains. Using a terminology with GBS is likely to be beneficial in any setting where the test domain is significantly different from the domain of the model's original training data.
1704.07138#26
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
27
| System | BLEU |
|---|---|
| **EN-DE** | |
| Baseline | 26.17 |
| Random | 25.18 (-0.99) |
| Beginning | 26.44 (+0.26) |
| GBS | 27.99 (+1.82) |
| **EN-FR** | |
| Baseline | 32.45 |
| Random | 31.48 (-0.97) |
| Beginning | 34.51 (+2.05) |
| GBS | 35.05 (+2.59) |
| **EN-PT** | |
| Baseline | 15.41 |
| Random | 18.26 (+2.85) |
| Beginning | 20.43 (+5.02) |
| GBS | 29.15 (+13.73) |

Table 2: BLEU results for EN-DE, EN-FR, and EN-PT terminology experiments using the Autodesk Post-Editing Corpus. “Random” indicates inserting terminology constraints at random positions in the baseline translation. “Beginning” indicates prepending constraints to baseline translations.
1704.07138#27
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
28
# 4.3 Analysis

Subjective analysis of decoder output shows that phrases added as constraints are not only placed correctly within the output sequence, but also have global effects upon translation quality. This is a desirable effect for user interaction, since it implies that users can bootstrap quality by adding the most critical constraints (i.e. those that are most essential to the output) first. Table 3 shows several examples from the experiments in Table 1, where the addition of lexical constraints was able to guide our NMT systems away from initially quite low-scoring hypotheses to outputs which perfectly match the reference translations.
1704.07138#28
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
29
# 5 Related Work

Most related work to date has presented modifications of SMT systems for specific use cases which constrain MT output via auxiliary inputs. The largest body of work considers Interactive Machine Translation (IMT): an MT system searches for the optimal target-language suffix given a complete source sentence and a desired prefix for the target output (Foster, 2002; Barrachina et al., 2009; Green, 2014). IMT can be viewed as a subcase of constrained decoding, where there is only one constraint which is guaranteed to be placed at the beginning of the output sequence. Wuebker et al. (2016) introduce prefix-decoding, which modifies the SMT beam search to first ensure that the target prefix is covered, and only then continues to build hypotheses for the suffix using beams organized by coverage of the remaining phrases in the source segment. Wuebker et al. (2016) and Knowles and Koehn (2016) also present a simple modification of NMT models for IMT, enabling models to predict suffixes for user-supplied prefixes.
1704.07138#29
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
30
Recently, some attention has also been given to SMT decoding with multiple lexical constraints. The Pick-Revise (PRIMT) (Cheng et al., 2016) framework for Interactive Post Editing introduces the concept of edit cycles. Translators specify constraints by editing a part of the MT output that is incorrect, and then asking the system for a new hypothesis, which must contain the user-provided correction. This process is repeated, maintaining constraints from previous iterations and adding new ones as needed. Importantly, their approach relies upon the phrase segmentation provided by the SMT system. The decoding algorithm can
1704.07138#30
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
31
**EN-DE**
Source: He was also an anti- smoking activist and took part in several campaigns .
Original Hypothesis: Es war auch ein Anti- Rauch- Aktiv- ist und nahmen an mehreren Kampagnen teil .
Reference: Ebenso setzte er sich gegen das Rauchen ein und nahm an mehreren Kampagnen teil .
Constrained Hypothesis: Ebenso setzte er sich gegen das Rauchen ein und nahm an mehreren Kampagnen teil .
Constraints: (1) Ebenso setzte er (2) gegen das Rauchen (3) nahm

**EN-FR**
Source: At that point I was no longer afraid of him and I was able to love him .
Original Hypothesis: Je n’avais plus peur de lui et j’étais capable de l’aimer .
Reference: Là je n’ai plus eu peur de lui et j’ai pu l’aimer .
Constrained Hypothesis: Là je n’ai plus eu peur de lui et j’ai pu l’aimer .
Constraints: (1) Là je n’ai (2) j’ai pu (3) eu
1704.07138#31
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
32
**EN-PT**
Source: Mo- dif- y drain- age features by selecting them individually .
Original Hypothesis: - Já temos as características de extracção de idade , com eles individualmente .
Reference: Modi- fique os recursos de drenagem ao selec- ion- á-los individualmente .
Constrained Hypothesis: Modi- fique os recursos de drenagem ao selec- ion- á-los individualmente .
Constraints: (1) drenagem ao selec- (2) Modi- fique os (3) recursos
1704.07138#32
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
33
Table 3: Manual analysis of examples from lexically constrained decoding experiments. “-” followed by whitespace indicates the internal segmentation of the translation model (see Section 3.2).

only make use of constraints that match phrase boundaries, because constraints are implemented as “rules” enforcing that source phrases must be translated as the aligned target phrases that have been selected as constraints. In contrast, our approach decodes at the token level, and is not dependent upon any explicit structure in the underlying model.

Domingo et al. (2016) also consider an interactive scenario where users first choose portions of an MT hypothesis to keep, then query for an updated translation which preserves these portions. The MT system decodes the source phrases which are not aligned to the user-selected phrases until the source sentence is fully covered. This approach is similar to the system of Cheng et al., and uses the “XML input” feature in Moses (Koehn et al., 2007).

# 6 Conclusion

Lexically constrained decoding is a flexible way to incorporate arbitrary subsequences into the output of any model that generates output sequences token-by-token. A wide spectrum of popular text generation models have this characteristic, and GBS should be straightforward to use with any model that already uses beam search.
1704.07138#33
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
34
In translation interfaces where translators can provide corrections to an existing hypothesis, these user inputs can be used as constraints, generating a new output each time a user fixes an error. By simulating this scenario, we have shown that such a workflow can provide a large improvement in translation quality at each iteration.

Some recent work considers the inclusion of soft lexical constraints directly into deep models for dialog generation, and special cases, such as recipe generation from a list of ingredients (Wen et al., 2015; Kiddon et al., 2016). Such constraint-aware models are complementary to our work, and could be used with GBS decoding without any change to the underlying models.

To the best of our knowledge, ours is the first work which considers general lexically constrained decoding for any model which outputs sequences, without relying upon alignments between input and output, and without using a search organized by coverage of the input.

By using a domain-specific terminology to generate target-side constraints, we have shown that a general domain model can be adapted to a new domain without any retraining. Surprisingly, this simple method can lead to significant performance gains, even when the terminology is created automatically.
1704.07138#34
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
35
In future work, we hope to evaluate GBS with models outside of MT, such as automatic summarization, image captioning or dialog generation. We also hope to introduce new constraint-aware models, for example via secondary attention mechanisms over lexical constraints.

# Acknowledgments

This project has received funding from Science Foundation Ireland in the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University funded under the SFI Research Centres Programme (Grant 13/RC/2106) co-funded under the European Regional Development Fund and the European Union Horizon 2020 research and innovation programme under grant agreement 645452 (QT21). We thank the anonymous reviewers, as well as Iacer Calixto, Peyman Passban, and Henry Elder for helpful feedback on early versions of this work.

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
1704.07138#35
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
36
Sergio Barrachina, Oliver Bender, Francisco Casacuberta, Jorge Civera, Elsa Cubel, Shahram Khadivi, Antonio Lagarda, Hermann Ney, Jesús Tomás, Enrique Vidal, and Juan-Miguel Vilar. 2009. Statistical approaches to computer-assisted translation. Computational Linguistics 35(1):3–28. https://doi.org/10.1162/coli.2008.07-055-R2-06-29.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation. Association for Computational Linguistics, Lisbon, Portugal, pages 1–46. http://aclweb.org/anthology/W15-3001.
1704.07138#36
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
37
Shanbo Cheng, Shujian Huang, Huadong Chen, Xinyu Dai, and Jiajun Chen. 2016. PRIMT: A pick-revise framework for interactive machine translation. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 1240–1249. http://aclweb.org/anthology/N/N16/N16-1148.pdf.

David Chiang. 2007. Hierarchical phrase-based translation. Comput. Linguist. 33(2):201–228. https://doi.org/10.1162/coli.2007.33.2.201.

Kyunghyun Cho, Bart van Merriënboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of
1704.07138#37
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
39
Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran's pure neural machine translation systems. CoRR abs/1610.05540. http://arxiv.org/abs/1610.05540.

Miguel Domingo, Alvaro Peris, and Francisco Casacuberta. 2016. Interactive-predictive translation based on multiple word-segments. Baltic J. Modern Computing 4(2):282–291.

George F. Foster. 2002. Text Prediction for Translators. Ph.D. thesis, Montreal, P.Q., Canada. AAINQ72434.
1704.07138#39
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
40
Spence Green. 2014. Mixed-Initiative Natural Language Translation. Ph.D. thesis, Stanford, CA, United States.

Chloé Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 329–339. http://aclweb.org/anthology/D/D16/D16-1032.pdf.

Rebecca Knowles and Philipp Koehn. 2016. Neural interactive translation prediction. AMTA 2016, Vol. page 107.

Philipp Koehn. 2009. A process study of computer-aided translation. Machine Translation 23(4):241–263. https://doi.org/10.1007/s10590-010-9076-3.

Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press, New York, NY, USA, 1st edition.
1704.07138#40
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
41
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL '07, pages 177–180. http://dl.acm.org/citation.cfm?id=1557769.1557821.

Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Comput. Linguist. 30(4):417–449. https://doi.org/10.1162/0891201042544884.

Judea Pearl. 1984. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
1704.07138#41
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]