As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward-only LM. We also demonstrate that domain-specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers.
# 2 Language model augmented sequence taggers (TagLM)
# 2.1 Overview
The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. 1. After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3).
# 2.2 Baseline sequence tagging model
Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies (Ma and Hovy, 2016; Lample et al., 2016; Yang et al., 2017; Chiu and Nichols, 2016) (left side of Figure 2).
Given a sentence of tokens $(t_1, t_2, \ldots, t_N)$ it first forms a representation, $x_k$, for each token by concatenating a character based representation $c_k$ with a token embedding $w_k$:
$$c_k = C(t_k; \theta_c) \qquad w_k = E(t_k; \theta_w) \qquad x_k = [c_k; w_k] \tag{1}$$
The character representation $c_k$ captures morphological information and is either a convolutional neural network (CNN) (Ma and Hovy, 2016; Chiu and Nichols, 2016) or RNN (Yang et al., 2017; Lample et al., 2016). It is parameterized by $C(\cdot, \theta_c)$ with parameters $\theta_c$. The token embeddings, $w_k$, are obtained as a lookup $E(\cdot, \theta_w)$, initialized using pre-trained word embeddings, and fine tuned during training (Collobert et al., 2011). To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, $k$, the hidden state $h_{k,i}$ of RNN layer $i$ is formed by concatenating the hidden states from the forward ($\overrightarrow{h}_{k,i}$) and backward ($\overleftarrow{h}_{k,i}$) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token $k$. More formally, for the first RNN layer that operates on $x_k$ to output $h_{k,1}$:
$$\overrightarrow{h}_{k,1} = \overrightarrow{R}_1(x_k, \overrightarrow{h}_{k-1,1}; \theta_{\overrightarrow{R}_1})$$
$$\overleftarrow{h}_{k,1} = \overleftarrow{R}_1(x_k, \overleftarrow{h}_{k+1,1}; \theta_{\overleftarrow{R}_1})$$
$$h_{k,1} = [\overrightarrow{h}_{k,1}; \overleftarrow{h}_{k,1}] \tag{2}$$
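As a concrete illustration of Eqs. (1) and (2), the following is a minimal PyTorch sketch of the token encoder and the first bidirectional RNN layer. It is not the authors' implementation; the layer sizes, the CNN filter width, and the choice of a GRU here are illustrative placeholders.

```python
import torch
import torch.nn as nn

class TokenEncoder(nn.Module):
    """x_k = [c_k; w_k]: char-CNN representation plus word embedding (Eq. 1)."""
    def __init__(self, n_chars=100, n_words=10000, char_dim=25,
                 char_filters=30, word_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.word_emb = nn.Embedding(n_words, word_dim)  # initialized from pre-trained vectors in practice

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, seq_len, max_word_len); word_ids: (batch, seq_len)
        b, t, l = char_ids.shape
        chars = self.char_emb(char_ids.view(b * t, l)).transpose(1, 2)   # (b*t, char_dim, l)
        c_k = torch.relu(self.char_cnn(chars)).max(dim=2).values          # max-pool over characters
        c_k = c_k.view(b, t, -1)
        w_k = self.word_emb(word_ids)
        return torch.cat([c_k, w_k], dim=-1)                              # x_k

class BiRNNLayer(nn.Module):
    """h_{k,1} = [forward; backward] hidden states over x_k (Eq. 2)."""
    def __init__(self, input_dim, hidden_dim=300):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, x):
        h, _ = self.rnn(x)   # (batch, seq_len, 2 * hidden_dim)
        return h

enc = TokenEncoder()
x = enc(torch.randint(0, 100, (2, 7, 20)), torch.randint(0, 10000, (2, 7)))
h1 = BiRNNLayer(x.size(-1))(x)
print(x.shape, h1.shape)   # torch.Size([2, 7, 130]) torch.Size([2, 7, 600])
```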
[Figure 1 graphic: Step 1: Pretrain word embeddings and language model on unlabeled data. Step 2: Prepare word embedding and LM embedding for each token in the input sequence. Step 3: Use both word embeddings and LM embeddings in the sequence tagging model.]
Figure 1: The main components in TagLM, our language-model-augmented sequence tagging system. The language model component (in orange) is used to augment the input token representation in a traditional sequence tagging model (in grey).
The second RNN layer is similar and uses $h_{k,1}$ to output $h_{k,2}$. In this paper, we use $L = 2$ layers of RNNs in all experiments and parameterize $R_i$ as either Gated Recurrent Units (GRU) (Cho et al., 2014) or Long Short-Term Memory units (LSTM) (Hochreiter and Schmidhuber, 1997) depending on the task.
Finally, the output of the final RNN layer $h_{k,L}$ is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g. using the BIOES labeling scheme, it is not possible for I-PER to follow B-LOC), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token. Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss (Lafferty et al., 2001) using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to Collobert et al. (2011).
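The Viterbi decoding step can be sketched in a few lines. The routine below is a generic Viterbi decoder over per-token tag scores and a tag-bigram transition matrix, not the authors' CRF code, and it omits the forward-backward training loss.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """emissions: (seq_len, n_tags) per-token tag scores from the dense layer.
    transitions: (n_tags, n_tags) score for tag i followed by tag j.
    Returns the highest-scoring tag sequence as a list of tag indices."""
    emissions = np.asarray(emissions, dtype=float)
    transitions = np.asarray(transitions, dtype=float)
    seq_len, n_tags = emissions.shape
    score = emissions[0].copy()                     # best score ending in each tag at position 0
    backpointers = np.zeros((seq_len, n_tags), dtype=int)
    for k in range(1, seq_len):
        # score of extending every previous tag to every current tag
        total = score[:, None] + transitions + emissions[k][None, :]
        backpointers[k] = total.argmax(axis=0)
        score = total.max(axis=0)
    # follow backpointers from the best final tag
    best = [int(score.argmax())]
    for k in range(seq_len - 1, 0, -1):
        best.append(int(backpointers[k, best[-1]]))
    return best[::-1]

print(viterbi_decode(np.random.randn(5, 4), np.random.randn(4, 4)))
```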
Figure 2: Overview of TagLM, our language model augmented sequence tagging architecture. The top level embeddings from a pre-trained bidirectional LM are inserted in a stacked bidirectional RNN sequence tagging model. See text for details.
# 2.3 Bidirectional LM
A language model computes the probability of a token sequence $(t_1, t_2, \ldots, t_N)$
$$p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_1, t_2, \ldots, t_{k-1}).$$
Recent state of the art neural language models (Józefowicz et al., 2016) use a similar architecture to our baseline sequence tagger where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history $(t_1, t_2, \ldots, t_k)$ into a fixed dimensional vector $\overrightarrow{h}^{LM}_k$. This is the forward LM embedding of the token at position $k$ and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of token $t_{k+1}$ using a softmax layer over words in the vocabulary.
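As an illustration, the forward LM embedding is simply the top-layer hidden state of the pre-trained (and frozen) forward LM at each position. The toy class below uses a small stand-in LSTM; the actual pre-trained LMs are far larger, and all names and sizes here are placeholders.

```python
import torch
import torch.nn as nn

class ForwardLM(nn.Module):
    """Toy stand-in for a pre-trained forward LM (the real pre-trained models are much larger)."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        self.softmax_layer = nn.Linear(hidden_dim, vocab_size)  # used for LM training, discarded when extracting embeddings

    def top_layer_states(self, token_ids):
        # The top LSTM layer's hidden state at position k encodes the history t_1..t_k;
        # this is the forward LM embedding used by TagLM. LM weights stay fixed.
        with torch.no_grad():
            h, _ = self.lstm(self.emb(token_ids))   # (batch, seq_len, hidden_dim)
        return h

lm = ForwardLM()
h_fwd = lm.top_layer_states(torch.randint(0, 10000, (2, 7)))
print(h_fwd.shape)   # torch.Size([2, 7, 256])
```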
The need to capture future context in the LM embeddings suggests it is beneficial to also consider a backward LM in addition to the traditional forward LM. A backward LM predicts the previous token given the future context. Given a sentence with $N$ tokens, it computes

$$p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k+1}, t_{k+2}, \ldots, t_N).$$
A backward LM can be implemented in an analogous way to a forward LM and produces the backward LM embedding $\overleftarrow{h}^{LM}_k$ for the sequence $(t_k, t_{k+1}, \ldots, t_N)$, the output embeddings of the top layer LSTM.
In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, $h^{LM}_k = [\overrightarrow{h}^{LM}_k; \overleftarrow{h}^{LM}_k]$. Note that in our formulation, the forward and backward LMs are independent, without any shared parameters.
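A minimal sketch of forming $h^{LM}_k$ from two separately pre-trained LMs follows; the toy embeddings and LSTMs stand in for the actual pre-trained models, and running the backward LM by reversing the input is one possible implementation, not necessarily the authors'.

```python
import torch
import torch.nn as nn

# Sketch of forming the bidirectional LM embedding h^LM_k = [forward; backward].
# As in the paper's formulation, the two directions share no parameters.
emb_fwd, lm_fwd = nn.Embedding(10000, 128), nn.LSTM(128, 256, num_layers=2, batch_first=True)
emb_bwd, lm_bwd = nn.Embedding(10000, 128), nn.LSTM(128, 256, num_layers=2, batch_first=True)

def bidirectional_lm_embeddings(token_ids):
    # token_ids: (batch, seq_len); all LM parameters are kept frozen.
    with torch.no_grad():
        h_fwd, _ = lm_fwd(emb_fwd(token_ids))             # encodes t_1 .. t_k at position k
        rev = torch.flip(token_ids, dims=[1])             # backward LM reads the sentence reversed
        h_bwd_rev, _ = lm_bwd(emb_bwd(rev))
        h_bwd = torch.flip(h_bwd_rev, dims=[1])           # encodes t_k .. t_N at position k
    return torch.cat([h_fwd, h_bwd], dim=-1)              # h^LM_k

h_lm = bidirectional_lm_embeddings(torch.randint(0, 10000, (2, 7)))
print(h_lm.shape)   # torch.Size([2, 7, 512])
```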
# 2.4 Combining LM with sequence model
Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings $h^{LM}_k$ with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace (2) with
$$h_{k,1} = [\overrightarrow{h}_{k,1}; \overleftarrow{h}_{k,1}; h^{LM}_k]. \tag{3}$$
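As a sketch, Eq. (3) is a plain concatenation of the frozen LM embedding with the first layer's output before the second RNN layer; all dimensions below are illustrative placeholders, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

batch, seq_len = 2, 7
first_layer_out = torch.randn(batch, seq_len, 2 * 300)   # [h_fwd_{k,1}; h_bwd_{k,1}]
h_lm = torch.randn(batch, seq_len, 512)                  # bidirectional LM embedding h^LM_k

h_k1 = torch.cat([first_layer_out, h_lm], dim=-1)        # Eq. (3)
second_rnn = nn.GRU(h_k1.size(-1), 300, batch_first=True, bidirectional=True)
h_k2, _ = second_rnn(h_k1)                               # feeds the dense + CRF layers
print(h_k2.shape)   # torch.Size([2, 7, 600])
```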
There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g. replacing (3) with $f([\overrightarrow{h}_{k,1}; \overleftarrow{h}_{k,1}; h^{LM}_k])$ where $f$ is a non-linear function). Another possibility introduces an attention-like mechanism that weights all the LM embeddings in a sentence before including them in the sequence model. Our initial results with the simple concatenation were encouraging so we did not explore these alternatives in this study, preferring to leave them for future work.
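For concreteness, the non-linear-mapping variant mentioned above (which was not evaluated in the paper) could look like the following hypothetical sketch, where the choice of a single tanh layer for $f$ and all sizes are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the unexplored alternative: apply a non-linear mapping f
# to [h_fwd_{k,1}; h_bwd_{k,1}; h^LM_k] before the second RNN layer.
concat_dim, mapped_dim = 600 + 512, 512
f = nn.Sequential(nn.Linear(concat_dim, mapped_dim), nn.Tanh())

h_concat = torch.randn(2, 7, concat_dim)   # [first-layer output; LM embedding]
h_mapped = f(h_concat)                     # would replace the plain concatenation of Eq. (3)
print(h_mapped.shape)                      # torch.Size([2, 7, 512])
```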
# 3 Experiments
We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task (Sang and Meulder, 2003) and the CoNLL 2000 Chunking task (Sang and Buchholz, 2000). We report the official evaluation metric (micro-averaged F1). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options (e.g., Ratinov and Roth, 2009). Following Chiu and Nichols (2016), we use the Senna word embeddings (Collobert et al., 2011) and pre-processed the text by lowercasing all tokens and replacing all digits with 0.
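The stated token normalization is straightforward to reproduce; the sketch below implements the two stated steps (lowercasing and mapping digits to 0) together with a standard BIO-to-BIOES conversion. The conversion routine is a common recipe, not code from the paper.

```python
import re

def normalize_token(token):
    """Lowercase and replace every digit with 0, as done before the Senna lookup."""
    return re.sub(r"\d", "0", token.lower())

def bio_to_bioes(tags):
    """Convert a valid BIO tag sequence to the BIOES scheme."""
    out = []
    for i, tag in enumerate(tags):
        if tag == "O":
            out.append(tag)
            continue
        prefix, entity = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        entity_continues = nxt == "I-" + entity
        if prefix == "B":
            out.append(("B-" if entity_continues else "S-") + entity)
        else:  # prefix == "I"
            out.append(("I-" if entity_continues else "E-") + entity)
    return out

print([normalize_token(t) for t in ["New", "York", "1,000"]])   # ['new', 'york', '0,000']
print(bio_to_bioes(["B-LOC", "I-LOC", "O", "B-PER"]))           # ['B-LOC', 'E-LOC', 'O', 'S-PER']
```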
CoNLL 2003 NER. The CoNLL 2003 NER task consists of newswire from the Reuters RCV1 corpus tagged with four different entity types (PER, LOC, ORG, MISC). It includes standard train, development and test sets. Following previous work (Yang et al., 2017; Chiu and Nichols, 2016) we trained on both the train and development sets after tuning hyperparameters on the development set.
The hyperparameters for our baseline model are similar to Yang et al. (2017). We use two bidirectional GRUs with 80 hidden units and 25 dimensional character embeddings for the token character encoder. The sequence layer uses two bidirectional GRUs with 300 hidden units each. For regularization, we add 25% dropout to the input of each GRU, but not to the recurrent connections.
CoNLL 2000 chunking. The CoNLL 2000 chunking task uses sections 15-18 from the Wall Street Journal corpus for training and section 20 for testing. It defines 11 syntactic chunk types (e.g., NP, VP, ADJP) in addition to other. We randomly sampled 1000 sentences from the training set as a held-out development set.
The baseline sequence tagger uses 30 dimensional character embeddings and a CNN with 30 filters of width 3 characters followed by a tanh non-linearity for the token character encoder. The sequence layer uses two bidirectional LSTMs with 200 hidden units. Following Ma and Hovy (2016) we added 50% dropout to the character embeddings, the input to each LSTM layer (but not recurrent connections) and to the output of the final LSTM layer.
Pre-trained language models. The primary bidirectional LMs we used in this study were trained on the 1B Word Benchmark (Chelba et al., 2014), a publicly available benchmark for large-scale language modeling. The training split has approximately 800 million tokens, about a 4000X increase over the number of training tokens in the CoNLL datasets. Józefowicz et al. (2016) explored several model architectures and released their best single model and training recipes. Following Sak et al. (2014), they used linear projection layers at the output of each LSTM layer to reduce the computation time but still maintain a large LSTM state. Their single best model took three weeks to train on 32 GPUs and achieved 30.0 test perplexity. It uses a character CNN with 4096 filters for input, followed by two stacked LSTMs, each with 8192 hidden units and a 1024 dimensional projection layer. We use CNN-BIG-LSTM to refer to this language model in our results.
In addition to the publicly available CNN-BIG-LSTM from Józefowicz et al. (2016),¹ we used the same corpus to train two additional language models with fewer parameters: forward LSTM-2048-512 and backward LSTM-2048-512. Both language models use token embeddings as input to a single layer LSTM with 2048 units and a 512 dimension projection layer. We closely followed the procedure outlined in Józefowicz et al. (2016), except we used synchronous parameter updates across four GPUs instead of asynchronous updates across 32 GPUs and ended training after 10 epochs. The test set perplexities for our forward and backward LSTM-2048-512 language models are 47.7 and 47.3, respectively.²
¹ https://github.com/tensorflow/models/tree/master/lm_1b
² Due to different implementations, the perplexity of the forward LM with similar configurations in Józefowicz et al. (2016) is different (45.0 vs. 47.7).
| Model | F1 ± std |
|---|---|
| Chiu and Nichols (2016) | 90.91 |
| Lample et al. (2016) | |
| Ma and Hovy (2016) | |
| Our baseline without LM | 90.87 ± 0.13 |
| TagLM | 91.93 ± 0.19 |
Table 1: Test set F1 comparison on CoNLL 2003 NER task, using only CoNLL 2003 data and unlabeled text.
Table 2: Test set F1 comparison on CoNLL 2000 Chunking task using only CoNLL 2000 data and unlabeled text.
Training. All experiments use the Adam optimizer (Kingma and Ba, 2015) with gradient norms clipped at 5.0. In all experiments, we fine tune the pre-trained Senna word embeddings but fix all weights in the pre-trained language models. In addition to explicit dropout regularization, we also use early stopping to prevent over-fitting and use the following process to determine when to stop training. We first train with a constant learning rate α = 0.001 on the training data and monitor the development set performance at each epoch. Then, at the epoch with the highest development performance, we start a simple learning rate annealing schedule: decrease α an order of magnitude (i.e., divide by ten), train for five epochs, decrease α an order of magnitude again, train for five more epochs and stop.
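The optimization recipe above can be written as a short training driver. The sketch below shows the gradient clipping and the two-stage annealing schedule on a toy model with random data; the stopping criterion for the constant-learning-rate phase is stubbed out, and none of this is the authors' training code.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 5)                                       # placeholder for the tagger
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_one_epoch():
    x, y = torch.randn(32, 10), torch.randint(0, 5, (32,))    # placeholder batch
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)   # clip gradient norms at 5.0
    optimizer.step()

def set_lr(lr):
    for group in optimizer.param_groups:
        group["lr"] = lr

# Phase 1: constant lr = 0.001 until dev F1 stops improving (stubbed as 10 epochs here).
for _ in range(10):
    train_one_epoch()

# Phase 2: divide lr by ten, train five epochs, divide by ten again, train five more, stop.
for lr in (1e-4, 1e-5):
    set_lr(lr)
    for _ in range(5):
        train_one_epoch()
```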
Following Chiu and Nichols (2016), we train each final model configuration ten times with different random seeds and report the mean and standard deviation F1. It is important to estimate the variance of model performance since the test data sets are relatively small.
# 3.1 Overall system results
Tables 1 and 2 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables 3 and 4 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).
In the CoNLL 2003 NER task, our model scores 91.93 mean F1, which is a statistically significant increase over the previous best result of 91.62 ± 0.33 from Chiu and Nichols (2016) that used gazetteers (at 95%, two-sided Welch t-test, p = 0.021).
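The significance test is a standard two-sided Welch t-test over the per-seed F1 scores. Since the individual per-seed scores are not published, the sketch below simulates them around the reported means and standard deviations purely for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative only: per-seed F1 scores simulated around the reported statistics
# (91.93 +/- 0.19 for TagLM, 91.62 +/- 0.33 for Chiu and Nichols, ten seeds each);
# these are not the actual experimental runs.
rng = np.random.default_rng(0)
taglm_f1 = rng.normal(91.93, 0.19, size=10)
baseline_f1 = rng.normal(91.62, 0.33, size=10)

# equal_var=False selects Welch's t-test (no equal-variance assumption);
# the test is two-sided by default, matching the comparison described in the text.
t_stat, p_value = stats.ttest_ind(taglm_f1, baseline_f1, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```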
In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean F1, exceeding all previously published results without additional labeled data by more than 1% absolute F1. The improvement over the previous best result of 95.77 in Hashimoto et al. (2016) that jointly trains with Penn Treebank (PTB) POS tags is statistically significant at 95% (p < 0.001 assuming standard deviation of 0.1).
Importantly, the LM embeddings amount to an average absolute improvement of 1.06 and 1.37 F1 in the NER and Chunking tasks, respectively.
Adding external resources. Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks when external resources (labeled data or task specific gazetteers) are available. Furthermore, Tables 3 and 4 show that, in most cases, the improvements we obtain by adding LM embeddings are larger than the improvements previously obtained by adding other forms of transfer or joint learning. For example, Yang et al. (2017) noted an improvement of only 0.06 F1 in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags and Chiu and Nichols (2016) reported an increase of 0.71 F1 when adding gazetteers to their baseline. In the Chunking task, previous work has reported from 0.28 to 0.75 improvement in F1 when including supervised labels from the PTB POS tags or CoNLL 2003 entities (Yang et al., 2017; Søgaard and Goldberg, 2016; Hashimoto et al., 2016).
# 3.2 Analysis
To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task.
| Model | External resources | F1 without | F1 with | Δ |
|---|---|---|---|---|
| Yang et al. (2017) | transfer from CoNLL 2000/PTB-POS | 91.2 | 91.26 | +0.06 |
| Chiu and Nichols (2016) | with gazetteers | 90.91 | 91.62 | +0.71 |
| Collobert et al. (2011) | with gazetteers | 88.67 | 89.59 | +0.92 |
| Luo et al. (2015) | joint with entity linking | 89.9 | 91.2 | +1.3 |
| Ours | unlabeled data only (no LM vs TagLM) | 90.87 | 91.93 | +1.06 |
Table 3: Improvements in test set F1 in CoNLL 2003 NER when including additional labeled data or task specific gazetteers (except the case of TagLM where we do not use additional labeled resources).
| Model | External resources | F1 without | F1 with | Δ |
|---|---|---|---|---|
| Yang et al. (2017) | transfer from CoNLL 2003/PTB-POS | 94.66 | 95.41 | +0.75 |
| Hashimoto et al. (2016) | jointly trained with PTB-POS | 95.02 | 95.77 | +0.75 |
| Søgaard and Goldberg (2016) | jointly trained with PTB-POS | 95.28 | 95.56 | +0.28 |
| Ours | unlabeled data only (no LM vs TagLM) | 95.00 | 96.37 | +1.37 |
Table 4: Improvements in test set F1 in CoNLL 2000 Chunking when including additional labeled data (except the case of TagLM where we do not use additional labeled data).
| Use LM embeddings at | F1 ± std |
|---|---|
| input to the first RNN layer | 91.55 ± 0.21 |
| output of the first RNN layer | 91.93 ± 0.19 |
| output of the second RNN layer | 91.72 ± 0.13 |
Table 5: Comparison of CoNLL-2003 test set F1 when the LM embeddings are included at different layers in the baseline tagger.
How to use LM embeddings? In this experiment, we concatenate the LM embeddings at different locations in the baseline sequence tagger. In particular, we used the LM embeddings $h^{LM}_k$ to:

- augment the input of the first RNN layer, i.e., $x_k = [c_k; w_k; h^{LM}_k]$;
- augment the output of the first RNN layer, i.e., $h_{k,1} = [\overrightarrow{h}_{k,1}; \overleftarrow{h}_{k,1}; h^{LM}_k]$;³
- augment the output of the second RNN layer, i.e., $h_{k,2} = [\overrightarrow{h}_{k,2}; \overleftarrow{h}_{k,2}; h^{LM}_k]$.

Table 5 shows that the second alternative performs best. We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These results are consistent with Søgaard and Goldberg (2016) who found that chunking performance was sensitive to the level at which additional POS supervision was added.

Does it matter which language model to use? In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table 6.

We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with F1 improvements between 0.22 and 0.27%, even with the relatively small backward LSTM-2048-512 LM.
³ This configuration is the same as Eq. 3 in §2.4. It was reproduced here for convenience.
LM size is important, and replacing the forward LSTM-2048-512 with CNN-BIG-LSTM (test perplexities of 47.7 to 30.0 on the 1B Word Benchmark) improves F1 by 0.26 - 0.31%, about as much as adding backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward LSTM-2048-512 with a backward LM analogous to the CNN-BIG-LSTM would further improve performance.
To highlight the importance of including language models trained on large scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data.
| Forward language model | Backward language model | LM perplexity (fwd) | LM perplexity (bwd) | F1 ± std |
|---|---|---|---|---|
| — | — | N/A | N/A | 90.87 ± 0.13 |
| LSTM-512-256† | LSTM-512-256† | 106.9 | 104.2 | 90.79 ± 0.15 |
| LSTM-2048-512 | — | 47.7 | N/A | 91.40 ± 0.18 |
| LSTM-2048-512 | LSTM-2048-512 | 47.7 | 47.3 | 91.62 ± 0.23 |
| CNN-BIG-LSTM | — | 30.0 | N/A | 91.66 ± 0.13 |
| CNN-BIG-LSTM | LSTM-2048-512 | 30.0 | 47.3 | 91.93 ± 0.19 |
Table 6: Comparison of CoNLL-2003 test set F1 for different language model combinations. All language models were trained and evaluated on the 1B Word Benchmark, except LSTM-512-256† which was trained and evaluated on the standard splits of the NER CoNLL 2003 dataset.
Due to the much smaller size of this data set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models decreased performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models helps because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data compared to the composition functions in the baseline tagger, which are only learned from labeled data.
Importance of task specific RNN. To understand the importance of including a task specific sequence RNN we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 F1, well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model which is only trained on unlabeled examples. Note that the LM weights are fixed in this experiment.
Dataset size. A priori, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from Yang et al. (2017) that samples 1% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without LM. In this scenario, test F1 increased 3.35% (from 67.66 to 71.01%) compared to an increase of 1.06% F1 for a similar comparison with the full training dataset. The analogous increases in Yang et al. (2017) are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% F1 for transfer from PTB POS tags. However, they found only a 0.06% F1 increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets.
Number of parameters. Our TagLM formulation increases the number of parameters in the second RNN layer $R_2$ due to the increase in the input dimension $h_1$ if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that number of parameters was the same as in TagLM. In this case, performance decreased slightly (by 0.15% F1) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline no LM model. In this case, test F1 increased slightly to 92.00 ± 0.11 indicating that the additional parameters in TagLM are slightly hurting performance.⁴
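The parameter-count effect is easy to check directly: widening the second bidirectional GRU's input by the LM-embedding dimension adds only input-to-hidden weights. The sizes below are illustrative, not the paper's exact configuration.

```python
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

# Illustrative sizes only: a second bidirectional GRU layer whose input either does
# or does not include a 512-dimensional LM embedding concatenated to the 600-dim
# first-layer output.
without_lm = nn.GRU(600, 300, batch_first=True, bidirectional=True)
with_lm = nn.GRU(600 + 512, 300, batch_first=True, bidirectional=True)

print(n_params(without_lm), n_params(with_lm))  # the LM input adds ~0.9M weights here
```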
Does the LM transfer across domains? One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE.⁵ ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material and Process). For this task, TagLM increased F1 on the development set by 4.12% (from 49.93 to 54.05%) for entity extraction over our baseline without LM embeddings and it was a major component in our winning submission to ScienceIE, Scenario 1 (Ammar et al., 2017). We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain.
# 4 Related work
Unlabeled data. TagLM was inspired by the widespread use of pre-trained word embeddings in supervised sequence tagging models. Besides pre-trained word embeddings, our method is most closely related to Li and McCallum (2005). Instead of using a LM, Li and McCallum (2005) uses a probabilistic generative model to infer context-sensitive latent variables for each token, which are then used as extra features in a supervised CRF tagger (Lafferty et al., 2001). Other semi-supervised learning methods for structured prediction problems include co-training (Blum and Mitchell, 1998; Pierce and Cardie, 2001), expectation maximization (Nigam et al., 2000; Mohit and Hwa, 2005), structural learning (Ando and Zhang, 2005) and maximum discriminant functions (Suzuki et al., 2007; Suzuki and Isozaki, 2008). It is easy to combine TagLM with any of the above methods by including LM embeddings as additional features in the discriminative components of the model (except for expectation maximization). A detailed discussion of semi-supervised learning methods in NLP can be found in (Søgaard, 2013).
⁴ A similar experiment for the Chunking task did not improve F1 so this conclusion is task dependent.
⁵ https://scienceie.github.io/
Melamud et al. (2016) learned a context encoder from unlabeled data with an objective function similar to a bi-directional LM and applied it to several NLP tasks closely related to the unlabeled objective function: sentence completion, lexical substitution and word sense disambiguation.
LM embeddings are related to a class of methods (e.g., Le and Mikolov, 2014; Kiros et al., 2015; Hill et al., 2016) for learning sentence and document encoders from unlabeled data, which can be used for text classification and textual entailment among other tasks. Dai and Le (2015) pre-trained LSTMs using language models and sequence autoencoders then fine tuned the weights for classification tasks. In contrast to our method that uses unlabeled data to learn token-in-context embeddings, all of these methods use unlabeled data to learn an encoder for an entire text sequence (sentence or document).
Neural language models. LMs have always been a critical component in statistical machine translation systems (Koehn, 2009). Recently, neural LMs (Bengio et al., 2003; Mikolov et al., 2010) have also been integrated in neural machine translation systems (e.g., Kalchbrenner and Blunsom, 2013; Devlin et al., 2014) to score candidate translations. In contrast, TagLM uses neural LMs to encode words in the input sequence.
Unlike forward LMs, bidirectional LMs have received little prior attention. Most similar to our formulation, Peris and Casacuberta (2015) used a bidirectional neural LM in a statistical machine translation system for instance selection. They tied the input token embeddings and softmax weights in the forward and backward directions, unlike our approach which uses two distinct models without any shared parameters. Frinken et al. (2012) also used a bidirectional n-gram LM for handwriting recognition.
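To make the two-model setup concrete, here is a minimal sketch (not the authors' released code) of computing per-token LM embeddings from separately parameterized forward and backward language models and concatenating them. It assumes PyTorch; the class name, dimensions, and interface are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLMEmbedder(nn.Module):
    """Two separate LMs with no shared parameters; their per-token hidden
    states are concatenated to form a context-sensitive LM embedding."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.fwd_emb = nn.Embedding(vocab_size, emb_dim)
        self.bwd_emb = nn.Embedding(vocab_size, emb_dim)
        self.fwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.bwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        fwd_states, _ = self.fwd_lstm(self.fwd_emb(token_ids))
        # The backward LM reads the sequence right-to-left.
        rev = torch.flip(token_ids, dims=[1])
        bwd_states, _ = self.bwd_lstm(self.bwd_emb(rev))
        bwd_states = torch.flip(bwd_states, dims=[1])
        # Concatenate the two directions at every token position.
        return torch.cat([fwd_states, bwd_states], dim=-1)
```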
Interpreting RNN states. Recently, there has been some interest in interpreting the activations of RNNs. Linzen et al. (2016) showed that single LSTM units can learn to predict singular-plural distinctions. Karpathy et al. (2015) visualized character level LSTM states and showed that individual cells capture long-range dependencies such as line lengths, quotes and brackets. Our work complements these studies by showing that LM states are useful for downstream tasks as a way
of interpreting what they learn.
Other sequence tagging models. Current state of the art results in sequence tagging problems are based on bidirectional RNN models. However, many other sequence tagging models have been proposed in the literature for this class of problems (e.g., Lafferty et al., 2001; Collins, 2002). LM embeddings could also be used as additional features in other models, although it is not clear whether the model complexity would be sufficient to effectively make use of them.
# 5 Conclusion
In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples.
# Acknowledgments
We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version.
# References
Waleed Ammar, Matthew E. Peters, Chandra Bhagavatula, and Russell Power. 2017. The AI2 system at SemEval-2017 Task 10 (ScienceIE): semi-supervised end-to-end entity and relation extraction. In ACL workshop (SemEval).
Rie Kubota Ando and Tong Zhang. 2005. A high-performance semi-supervised learning method for text chunking. In ACL.
Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. In JMLR.
Avrim Blum and Tom Mitchell. 1998. Combining la- beled and unlabeled data with co-training. In COLT.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2014. One bil- lion word benchmark for measuring progress in sta- tistical language modeling. CoRR abs/1312.3005.
Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. In TACL.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In SSST@EMNLP.
Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In EMNLP.
Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. In JMLR.
Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised sequence learning. In NIPS.
Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In ACL.
Volkmar Frinken, Alicia Fornés, Josep Lladós, and Jean-Marc Ogier. 2012. A bidirectional language model for handwriting recognition. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR).
Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2016. A joint many-task model: Growing a neural network for multiple NLP tasks. CoRR abs/1611.01587.
Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In HLT-NAACL.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9.
Rafal J´ozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the lim- its of language modeling. CoRR abs/1602.02410.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP.
Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. In ICLR workshop.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
Jamie Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Tor- ralba, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS.
Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press.
John D. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random ï¬elds: Prob- abilistic models for segmenting and labeling se- quence data. In ICML.
Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL-HLT.
Quoc V. Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In ICML.
Wei Li and Andrew McCallum. 2005. Semi-supervised sequence modeling with syntactic topic models. In AAAI.
Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. In TACL.
Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Za- iqing Nie. 2015. Joint entity recognition and disam- biguation. In EMNLP.
Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In ACL.
Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context em- bedding with bidirectional lstm. In CoNLL.
Tomas Mikolov, Martin Karaï¬Â´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Inter- speech.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS.
Behrang Mohit and Rebecca Hwa. 2005. Syntax-based semi-supervised named entity tagging. In ACL.
Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP.
´Alvaro Peris and Francisco Casacuberta. 2015. A bidi- rectional recurrent neural language model for ma- chine translation. Procesamiento del Lenguaje Nat- ural .
David Pierce and Claire Cardie. 2001. Limitations of co-training for natural language learning from large datasets. In EMNLP.
Lev-Arie Ratinov and Dan Roth. 2009. Design chal- lenges and misconceptions in named entity recogni- tion. In CoNLL.
Hasim Sak, Andrew W. Senior, and Franoise Beaufays. 2014. Long short-term memory recurrent neural network architectures for large scale acoustic mod- eling. In INTERSPEECH.
Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task chunk- ing. In CoNLL/LLL.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In CoNLL.
Anders Søgaard. 2013. Semi-supervised learning and domain adaptation in natural language processing. Synthesis Lectures on Human Language Technologies.
Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In ACL.
Jun Suzuki, Akinori Fujino, and Hideki Isozaki. 2007. Semi-supervised structured output learning based on a hybrid generative and discriminative approach. In EMNLP-CoNLL.
Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using giga- word scale unlabeled data. In ACL.
Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In ICLR.
# Mapping Instructions and Visual Observations to Actions with Reinforcement Learning
# Dipendra Misra†, John Langford‡, and Yoav Artzi†
† Dept. of Computer Science and Cornell Tech, Cornell University, New York, NY 10044 {dkm, yoav}@cs.cornell.edu
# ‡ Microsoft Research, New York, NY 10011 [email protected]
# Abstract
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
# Introduction
An agent executing natural language instructions requires robust understanding of language and its environment. Existing approaches addressing this problem assume structured environment representations (e.g., Chen and Mooney, 2011; Mei et al., 2016), or combine separately trained models (e.g., Matuszek et al., 2010; Tellex et al., 2011), including for language understanding and visual reasoning. We propose to directly map text and raw image input to actions with a single learned model. This approach offers multiple benefits, such as not requiring intermediate representations, planning procedures, or training multiple models.
Figure 1 illustrates the problem in the Blocks environment (Bisk et al., 2016). The agent observes the environment as an RGB image using a camera sensor. Given the RGB input, the agent
(Figure 1 image: the observed RGB view of the block configuration, with the north and south directions marked.)
Put the Toyota block in the same row as the SRI block, in the first open space to the right of the SRI block
Move Toyota to the immediate right of SRI, evenly aligned and slightly separated
Move the Toyota block around the pile and place it just to the right of the SRI block
Place Toyota block just to the right of The SRI Block
Toyota, right side of SRI
Figure 1: Instructions in the Blocks environment. The instructions all describe the same task. Given the observed RGB image of the start state (large image), our goal is to execute such instructions. In this task, the direct-line path to the target position is blocked, and the agent must plan and move the Toyota block around. The small image marks the target and an example path, which includes 34 steps.
must recognize the blocks and their layout. To understand the instruction, the agent must identify the block to move (Toyota block) and the destination (just right of the SRI block). This requires solving semantic and grounding problems. For example, consider the topmost instruction in the figure. The agent needs to identify the phrase referring to the block to move, Toyota block, and ground it. It must resolve and ground the phrase SRI block as a reference position, which is then modified by the spatial meaning recovered from the same row as or first open space to the right of, to identify the goal position. Finally, the agent needs to generate actions, for example moving the Toyota block around obstructing blocks.
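To make the action space of this task concrete, the sketch below enumerates the moves described in the paper (20 blocks, four directions, plus STOP, giving 81 actions per step) and a toy transition that rejects blocked moves. It is an illustrative reconstruction, not the released simulator; the block names, step size, and collision test are assumptions.

```python
from itertools import product

BLOCKS = [f"block_{i}" for i in range(20)]          # 20 brand-labeled blocks
DIRECTIONS = ["north", "south", "east", "west"]

# 20 blocks x 4 directions plus STOP = 81 actions per step.
ACTIONS = [("STOP", None)] + list(product(BLOCKS, DIRECTIONS))
assert len(ACTIONS) == 81

def transition(state, action, step=0.04):
    """state maps each block to an (x, y) position on the plane.
    Returns the next state, unchanged if the move would collide."""
    block, direction = action
    if block == "STOP":
        return state
    dx, dy = {"north": (0, step), "south": (0, -step),
              "east": (step, 0), "west": (-step, 0)}[direction]
    x, y = state[block]
    target = (x + dx, y + dy)
    blocked = any(other != block
                  and abs(pos[0] - target[0]) < step
                  and abs(pos[1] - target[1]) < step
                  for other, pos in state.items())
    if blocked:
        return state          # blocks cannot move over or through other blocks
    new_state = dict(state)
    new_state[block] = target
    return new_state
```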
To address these challenges with a single model,
we design a neural network agent. The agent executes instructions by generating a sequence of actions. At each step, the agent takes as input the instruction text, observes the world as an RGB image, and selects the next action. Action execution changes the state of the world. Given an observation of the new world state, the agent selects the next action. This process continues until the agent indicates execution completion. When selecting actions, the agent jointly reasons about its observations and the instruction text. This enables decisions based on close interaction between observations and linguistic input.
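The sketch below shows this observe–predict–execute loop as code. It is a schematic reconstruction under assumed interfaces (env.observe, env.step, policy.act), not the released implementation; the max_steps cutoff and the K default are illustrative assumptions.

```python
def execute_instruction(env, policy, instruction, K=2, max_steps=40):
    """Roll out one instruction: at each step the agent sees the instruction,
    the current RGB image, K previous images, and its previous action."""
    prev_images = []                 # most recent observations
    prev_action = "NONE"             # special value before the first step
    image = env.observe()            # RGB rendering of the current world state
    trajectory = []
    for _ in range(max_steps):
        context = (instruction, image, tuple(prev_images[-K:]), prev_action)
        action = policy.act(context)     # jointly reasons over text and vision
        trajectory.append((image, action))
        if action == "STOP":             # agent signals completion
            break
        prev_images.append(image)
        env.step(action)                 # executing the action changes the world
        image = env.observe()
        prev_action = action
    return trajectory
```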
We train the agent with different levels of supervision, including complete demonstrations of the desired behavior and annotations of the goal state only. While the learning problem can be easily cast as a supervised learning problem, learning only from the states observed in the training data results in poor generalization and failure to recover from test errors. We use reinforcement learning (Sutton and Barto, 1998) to observe a broader set of states through exploration. Following recent work in robotics (e.g., Levine et al., 2016; Rusu et al., 2016), we assume the training environment, in contrast to the test environment, is instrumented and provides access to the state. This enables a simple problem reward function that uses the state and provides positive reward on task completion only. This type of reward offers two important advantages: (a) it is a simple way to express the ideal agent behavior we wish to achieve, and (b) it creates a platform to add training data information.
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 5 | We use reward shaping (Ng et al., 1999) to ex- ploit the training data and add to the reward ad- ditional information. The modularity of shap- ing allows varying the amount of supervision, for example by using complete demonstrations for only a fraction of the training examples. Shap- ing also naturally associates actions with imme- diate reward. This enables learning in a contex- tual bandit setting (Auer et al., 2002; Langford and Zhang, 2007), where optimizing the immedi- ate reward is sufï¬cient and has better sample com- plexity than unconstrained reinforcement learn- ing (Agarwal et al., 2014). | 1704.08795#5 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 6 | We evaluate with the block world environment and data of Bisk et al. (2016), where each instruc- tion moves one block (Figure 1). While the orig- inal task focused on source and target prediction only, we build an interactive simulator and formulate the task of predicting the complete sequence of actions. At each step, the agent must select be- tween 81 actions with 15.4 steps required to com- plete a task on average, signiï¬cantly more than existing environments (e.g., Chen and Mooney, 2011). Our experiments demonstrate that our re- inforcement learning approach effectively reduces execution error by 24% over standard supervised learning and 34-39% over common reinforcement learning techniques. Our simulator, code, models, and execution videos are available at: https: //github.com/clic-lab/blocks.
# 2 Technical Overview | 1704.08795#6 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 7 | Task Let be the set of all instructions, S the set of all world states, and A the set of all actions. An instruction Z ⬠#& is a sequence (1,-...,p), where each x; is a token. The agent executes instructions by generating a sequence of actions, and indicates execution completion with the special action STOP. Action execution mod- ifies the world state following a transition func- tion T : S x A â S. The execution é of an instruction Z starting from s; is an m-length se- quence ((s1,a1),...,(Sm,@m)), where s; ⬠S, aj ⬠A, T(sj,aj) = sj41 and am = STOP. In Blocks (Figure 1), a state specifies the positions of all blocks. For each action, the agent moves a single block on the plane in one of four direc- ions (north, south, east, or west). There are 20 blocks, and 81 possible actions at each step, in- cluding STOP. For example, to correctly execute the instructions in the figure, the agentâs likely first action is TOYOTA-WEST, which moves the Toyota block one step west. Blocks can not move over or through other blocks. Model The agent observes the | 1704.08795#7 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 8 | likely first action is TOYOTA-WEST, which moves the Toyota block one step west. Blocks can not move over or through other blocks. Model The agent observes the world state via a visual sensor (i.e., a camera). Given a world state s, the agent observes an RGB image I gen- erated by the function IMG(s). We distinguish be- ween the world state s and the agent context! 3, which includes the instruction, the observed image IMG(s), images of previous states, and the pre- vious action. To map instructions to actions, the agent reasons about the agent context § to generate a sequence of actions. At each step, the agent gen- erates a single action. We model the agent with a | 1704.08795#8 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 10 | neural network policy. At each step j, the network takes as input the current agent context Ësj, and pre- dicts the next action to execute aj. We formally deï¬ne the agent context and model in Section 4. Learning We assume access to training data with N examples {(¯x(i), s i=1, where ¯x(i) (i) is a start state, and ¯e(i) is is an instruction, s 1 (i) an execution demonstration of ¯x(i) starting at s 1 . We use policy gradient (Section 5) with reward shaping derived from the training data to increase learning speed and exploration effectiveness (Sec- tion 6). Following work in robotics (e.g., Levine et al., 2016), we assume an instrumented environ- ment with access to the world state to compute the reward during training only. We deï¬ne our ap- proach in general terms with demonstrations, but also experiment with training using goal states. Evaluation We evaluate task completion error i=1, where ¯x(i) is an on a test set {(¯x(i), s instruction, s is the goal state. We measure execution error as the distance between the ï¬nal execution state and s
# 3 Related Work | 1704.08795#10 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 11 | Learning to follow instructions was studied ex- tensively with structured environment represen- including with semantic parsing (Chen tations, and Mooney, 2011; Kim and Mooney, 2012, 2013; Artzi and Zettlemoyer, 2013; Artzi et al., 2014a,b; Misra et al., 2015, 2016), alignment models (Andreas and Klein, 2015), reinforcement learning (Branavan et al., 2009, 2010; Vogel and Jurafsky, 2010), and neural network models (Mei et al., 2016). In contrast, we study the problem of an agent that takes as input instructions and raw vi- sual input. Instruction following with visual input was studied with pipeline approaches that use sep- arately learned models for visual reasoning (Ma- tuszek et al., 2010, 2012; Tellex et al., 2011; Paul et al., 2016). Rather than decomposing the prob- lem, we adopt a single-model approach and learn from instructions paired with demonstrations or goal states. Our work is related to Sung et al. (2015). While they use sensory input to select and adjust a trajectory observed during training, we are not restricted to training sequences. Executing instructions in non-learning settings has also re- ceived signiï¬cant attention (e.g., Winograd, 1972; Webber et al., 1995; MacMahon et al., 2006). | 1704.08795#11 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 12 | Our work is related to a growing interest in problems that combine language and vision, including visual question answering (e.g., Antol et al., 2015; Andreas et al., 2016b,a), caption gen- eration (e.g., Chen et al., 2015, 2016; Xu et al., 2015), and visual reasoning (Johnson et al., 2016; Suhr et al., 2017). We address the prediction of the next action given a world image and an instruction. Reinforcement learning with neural networks has been used for various NLP tasks, including text-based games (Narasimhan et al., 2015; He et al., 2016), information extraction (Narasimhan et al., 2016), co-reference resolution (Clark and Manning, 2016), and dialog (Li et al., 2016). | 1704.08795#12 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 13 | Neural network reinforcement learning tech- niques have been recently studied for behavior learning tasks, including playing games (Mnih et al., 2013, 2015, 2016; Silver et al., 2016) and solving memory puzzles (Oh et al., 2016). In con- trast to this line of work, our data is limited. Ob- serving new states in a computer game simply re- quires playing it. However, our agent also consid- ers natural language instructions. As the set of in- structions is limited to the training data, the set of agent contexts seen during learning is constrained. We address the data efï¬ciency problem by learn- ing in a contextual bandit setting, which is known to be more tractable (Agarwal et al., 2014), and us- ing reward shaping to increase exploration effec- tiveness. Zhu et al. (2017) address generalization of reinforcement learning to new target goals in vi- sual search by providing the agent an image of the goal state. We address a related problem. How- ever, we provide natural language and the agent must learn to recognize the goal state. | 1704.08795#13 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 14 | Reinforcement learning is extensively used in robotics (Kober et al., 2013). Similar to recent work on learning neural network policies for robot control (Levine et al., 2016; Schulman et al., 2015; Rusu et al., 2016), we assume an instrumented training environment and use the state to compute rewards during learning. Our approach adds the ability to specify tasks using natural language.
# 4 Model
We model the agent policy Ï with a neural net- work. The agent observes the instruction and an RGB image of the world. Given a world state s, the image I is generated using the function IMG(s). The instruction execution is generated one step at a time. At each step j, the agent observes an image Ij of the current world state sj and the instruction ¯x, predicts the action aj, and executes it to transition to the next state sj+1.
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 15 | Figure 2: Illustration of the policy architecture showing the 10th step in the execution of the instruction Place the Toyota east of SRI in the state from Figure 1. The network takes as input the instruction ¯x, image of the current state I10, images of previous states I8 and I9 (with K = 2), and the previous action a9. The text and images are embedded with LSTM and CNN. The actions are selected with the task speciï¬c multi-layer perceptron. | 1704.08795#15 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 16 | This process continues until STOP is predicted and the agent stops, indicating instruction completion. The agent also has access to K images of previ- ous states and the previous action to distinguish between different stages of the execution (Mnih et al., 2015). Figure 2 illustrates our architecture. the agent consid- step j, ers an agent context Ësj, which is a tuple (¯x, Ij, Ijâ1, . . . , IjâK, ajâ1), where ¯x is the natu- ral language instruction, Ij is an image of the cur- rent world state, the images Ijâ1, . . . , IjâK repre- sent K previous states, and ajâ1 is the previous action. The agent context includes information about the current state and the execution. Consid- ering the previous action ajâ1 allows the agent to avoid repeating failed actions, for example when trying to move in the direction of an obstacle. In Figure 2, the agent is given the instruction Place the Toyota east of SRI, is at the 10-th execution step, and considers K = 2 previous images.
# Formally,2 | 1704.08795#16 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 17 | # Formally,2
sual state v (Mnih et al., 2013). The last ac- tion ajâ1 is embedded with the function Ïa(ajâ1). The vectors vj, ¯x, and Ïa(ajâ1) are concatenated to create the agent context vector representation Ësj = [vj, ¯x, Ïa(ajâ1)].
To compute the action to execute, we use a feed- forward perceptron that decomposes according to the domain actions. This computation selects the next action conditioned on the instruction text and observations from both the current world state and recent history. In the block world domain, where actions decompose to selecting the block to move and the direction, the network computes block and direction probabilities. Formally, we decompose an action a to direction aD and block aB. We com- pute the feedforward network:
h1 = max(W(1) ŝj + b(1), 0)
hD = W(D) h1 + b(D)
hB = W(B) h1 + b(B)
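A hedged PyTorch sketch of these task-specific layers is given below: a shared hidden layer followed by separate block and direction softmaxes whose product gives the action probability. Layer sizes are illustrative, and how the STOP action is folded into the decomposition is left out.

```python
import torch
import torch.nn as nn

class ActionHead(nn.Module):
    """Shared hidden layer, then separate block and direction logits."""
    def __init__(self, context_dim, hidden_dim=120, n_blocks=20, n_directions=4):
        super().__init__()
        self.fc = nn.Linear(context_dim, hidden_dim)           # W(1), b(1)
        self.block = nn.Linear(hidden_dim, n_blocks)           # W(B), b(B)
        self.direction = nn.Linear(hidden_dim, n_directions)   # W(D), b(D)

    def forward(self, s_hat):
        h1 = torch.relu(self.fc(s_hat))
        p_block = torch.softmax(self.block(h1), dim=-1)
        p_dir = torch.softmax(self.direction(h1), dim=-1)
        # Joint action probability as the product of the two factors.
        return p_block.unsqueeze(-1) * p_dir.unsqueeze(-2)
```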
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 18 | We generate continuous vector representations for all inputs, and jointly reason about both text and image modalities to select the next action. We use a recurrent neural network (RNN; Elman, 1990) with a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) recurrence to map the instruction F = (21,...,%p) to a vector representation x. Each token 2; is mapped to a fixed dimensional vector with the learned embedding function 7(x;). The instruc- tion representation X is computed by applying the LSTM recurrence to generate a sequence of hid- den states 1; = LSTM(w(a;), 1,1), and comput- ing the mean X = ty 1; (Narasimhan et al., 2015). The current image J; and previous im- ages I;_1,...,lj;-% are concatenated along the channel dimension and embedded with a convolu- tional neural network (CNN) to generate the viand the action probability is a product of the com- ponent probabilities: P (aD P (aB | 1704.08795#18 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
At the beginning of execution, the first action a_0 is set to the special value NONE, and previous images are zero matrices. The embedding function ψ is a learned matrix. The function ψ_a concatenates the embeddings of a^D_{j−1} and a^B_{j−1}, which are obtained from learned matrices, to compute the embedding of a_{j−1}. The model parameters θ include W^{(1)}, b^{(1)}, W^{(D)}, b^{(D)}, W^{(B)}, b^{(B)}, the parameters of the LSTM recurrence, the parameters of the convolutional network CNN, and the embedding matrices. In our experiments (Section 7), all parameters are learned without external resources.
# 5 Learning
²We use bold-face capital letters for matrices and bold-face lowercase letters for vectors. Computed input and state representations use bold versions of the symbols. For example, x̄ is the computed representation of an instruction x̄.
We use policy gradient for reinforcement learning (Williams, 1992) to estimate the parameters θ of the agent policy. We assume access to a
training set of N examples {(x̄^{(i)}, s^{(i)}_1, ē^{(i)})}^{N}_{i=1}, where x̄^{(i)} is an instruction, s^{(i)}_1 is a start state, and ē^{(i)} is an execution demonstration starting from s^{(i)}_1 of instruction x̄^{(i)}. The main learning challenge is learning how to execute instructions given raw visual input from relatively limited data. We learn in a contextual bandit setting, which provides theoretical advantages over general reinforcement learning. In Section 8, we verify this empirically.

Reward Function The instruction execution problem defines a simple problem reward to measure task completion. The agent receives a positive reward when the task is completed, a negative reward for incorrect completion (i.e., STOP in the wrong state) and for actions that fail to execute (e.g., when the direction is blocked), and a small penalty otherwise, which induces a preference for shorter trajectories. To compute the reward, we assume access to the world state. This learning setup is inspired by work in robotics, where it is achieved by instrumenting the training environment (Section 3). The agent, on the other hand, only uses the agent context (Section 4). When deployed, the system relies on visual observations and natural language instructions only. The reward function R^{(i)} : S × A → R is defined for each training example (x̄^{(i)}, s^{(i)}_1, ē^{(i)}), i = 1 . . . N, where m^{(i)} is the length of ē^{(i)} and s^{(i)}_{m^{(i)}} is the goal state it reaches.
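A minimal sketch of such a task-completion reward is given below. The exact constants were not recoverable from this excerpt, so the +1/-1/-delta values are illustrative assumptions consistent with the description; `states_equal` and `failed` are assumed environment hooks.

```python
def problem_reward(state, action, goal_state, states_equal, failed, delta=0.02):
    """Sparse completion reward: positive for correct STOP, negative for wrong STOP
    or failed actions, and a small per-step penalty otherwise (values assumed)."""
    if action == "STOP":
        return 1.0 if states_equal(state, goal_state) else -1.0
    if failed(state, action):        # e.g., the chosen direction is blocked
        return -1.0
    return -delta                    # small penalty favours shorter trajectories
```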
The reward function does not provide intermediate positive feedback to the agent for actions that bring it closer to its goal. When the agent explores randomly early during learning, it is unlikely to encounter the goal state due to the large number of steps required to execute tasks. As a result, the agent does not observe positive reward and fails to learn. In Section 6, we describe how reward shaping, a method to augment the reward with additional information, is used to take advantage of the training data and address this challenge.

Policy Gradient Objective We adapt the policy gradient objective defined by Sutton et al. (1999) to multiple starting states and reward functions:

J = \frac{1}{N} \sum_{i=1}^{N} V^{(i)}_{\pi}(s^{(i)}_{1}) ,
where V^{(i)}_{\pi}(s^{(i)}_{1}) is the value given by R^{(i)} starting from s^{(i)}_{1} under the policy π. The summation expresses the goal of learning a behavior parameterized by natural language instructions.

Contextual Bandit Setting In contrast to most policy gradient approaches, we apply the objective in a contextual bandit setting where immediate reward is optimized rather than total expected reward. The primary theoretical advantage of contextual bandits is much tighter sample complexity bounds when comparing upper bounds for contextual bandits (Langford and Zhang, 2007), even with an adversarial sequence of contexts (Auer et al., 2002), to lower bounds (Krishnamurthy et al., 2016) or upper bounds (Kearns et al., 1999) for total reward maximization. This property is particularly suitable for the few-sample regime common in natural language problems. While reinforcement learning with neural network policies is known to require large amounts of training data (Mnih et al., 2015), the limited number of training sentences constrains the diversity and volume of agent contexts we can observe during training. Empirically, this translates to poor results when optimizing the total reward (REINFORCE baseline in Section 8). To derive the approximate gradient, we use the likelihood ratio method:
\nabla_\theta J = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}\left[ \nabla_\theta \log \pi(\tilde{s}, a) \, R^{(i)}(s, a) \right] ,
where the reward is computed from the world state but the policy is learned on the agent context. We approximate the gradient using sampling.
This training regime, where immediate reward optimization is sufficient to optimize the policy parameters θ, is enabled by the shaped reward we introduce in Section 6. While the objective is designed to work best with the shaped reward, the algorithm remains the same for any choice of reward definition, including the original problem reward or several possibilities formed by reward shaping.

Entropy Penalty We observe that early in training, the agent is overwhelmed with negative reward and rarely completes the task. This results in the policy π rapidly converging towards a suboptimal deterministic policy with an entropy of 0. To delay premature convergence we add an entropy term to the objective (Williams and Peng, 1991; Mnih et al., 2016). The entropy term encourages a uniform distribution policy, and in practice stimulates exploration early during training. The regularized gradient is:
\nabla_\theta J = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}\left[ \nabla_\theta \log \pi(\tilde{s}, a) \, R^{(i)}(s, a) + \lambda \nabla_\theta H(\pi(\tilde{s}, \cdot)) \right] ,
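The following sketch shows how a single-rollout, sampled estimate of this regularized gradient could be computed. The callables `grad_log_pi`, `grad_entropy`, and `reward_fn` and the episode layout are illustrative assumptions; the key point is that the reward uses the world state while the policy only sees the agent context.

```python
import numpy as np

def sampled_policy_gradient(episode, grad_log_pi, grad_entropy, reward_fn, lam=0.1):
    """Contextual-bandit gradient estimate averaged over one sampled rollout.

    episode is a list of (agent_context, world_state, action) triples.
    """
    grads = []
    for s_tilde, world_state, action in episode:
        immediate = reward_fn(world_state, action)        # shaped immediate reward
        g = grad_log_pi(s_tilde, action) * immediate      # likelihood-ratio term
        g = g + lam * grad_entropy(s_tilde)               # entropy regularization
        grads.append(g)
    return np.mean(grads, axis=0)
```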
Algorithm 1 Policy gradient learning

Input: Training data {(x̄^{(i)}, s^{(i)}_1, ē^{(i)})}^{N}_{i=1}, learning rate µ, epochs T, horizon J, and entropy regularization term λ.
Definitions: IMG(s) is a camera sensor that reports an RGB image of state s. π is a probabilistic neural network policy parameterized by θ, as described in Section 4. EXECUTE(s, a) executes the action a at the state s and returns the new state. R^{(i)} is the reward function for example i. ADAM(∆) applies a per-feature learning rate to the gradient ∆ (Kingma and Ba, 2014).
Output: Policy parameters θ.

1: » Iterate over the training data.
2: for t = 1 to T, i = 1 to N do
3:   I_0, . . . , I_{1−K} = 0̄
4:   a_0 = NONE, s_1 = s^{(i)}_1
5:   j = 1
6:   » Rollout up to episode limit.
7:   while j ≤ J and a_{j−1} ≠ STOP do
8:     » Observe world and construct agent context.
9:     I_j = IMG(s_j)
10:    s̃_j = (x̄^{(i)}, I_j, I_{j−1}, . . . , I_{j−K}, a_{j−1})
11:    » Sample an action from the policy.
12:    a_j ∼ π(s̃_j, a)
13:    s_{j+1} = EXECUTE(s_j, a_j)
14:    » Compute the approximate gradient.
15:    ∆_j ← ∇_θ log π(s̃_j, a_j) R^{(i)}(s_j, a_j) + λ∇_θ H(π(s̃_j, ·))
16:    j += 1
17:  θ ← θ + µ ADAM((1/(j−1)) Σ_{j′} ∆_{j′})
18: return θ
where H(π(s̃, ·)) is the entropy of π given the agent context s̃, and λ is a hyperparameter that controls the strength of the regularization. While the entropy term delays premature convergence, it does not eliminate it. Similar issues are observed for vanilla policy gradient (Mnih et al., 2016).

Algorithm Algorithm 1 shows our learning algorithm. We iterate over the data T times. In each epoch, for each training example (x̄^{(i)}, s^{(i)}_1, ē^{(i)}), i = 1 . . . N, we perform a rollout using our policy to generate an execution (lines 7-16). The length of the rollout is bounded by J, but may be shorter if the agent selected the STOP action. At each step j, the agent updates the agent context s̃_j (lines 9-10), samples an action from the policy π (line 12), and executes it to generate the new world state s_{j+1} (line 13). The gradient is approximated using the sampled action with the computed reward R^{(i)}(s_j, a_j) (line 15). Following each rollout, we update the parameters θ with the mean of the gradients using ADAM (Kingma and Ba, 2014).
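A compact sketch of this loop is shown below. The `policy`, `execute` (environment transition), and `adam_update` (per-feature update) objects are assumed interfaces supplied by the caller, so their method names and signatures are illustrative rather than the paper's code.

```python
import numpy as np

def train(policy, examples, reward_fns, execute, adam_update, epochs, horizon, lam):
    """Policy-gradient learning in a contextual bandit setting (cf. Algorithm 1)."""
    theta = policy.parameters()
    for _ in range(epochs):
        for (instruction, start_state), reward_fn in zip(examples, reward_fns):
            state, prev_action, grads = start_state, None, []
            for _ in range(horizon):                       # rollout up to the episode limit J
                context = policy.build_context(instruction, state, prev_action)
                action = policy.sample(context)            # a_j ~ pi(s_tilde_j, .)
                grads.append(policy.grad_log_prob(context, action) * reward_fn(state, action)
                             + lam * policy.grad_entropy(context))
                state, prev_action = execute(state, action), action
                if action == "STOP":
                    break
            theta = adam_update(theta, np.mean(grads, axis=0))  # mean gradient per rollout
    return theta
```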
# 6 Reward Shaping
Reward shaping is a method for transforming a reward function by adding a shaping term to the problem reward.
Figure 3: Visualization of the shaping potentials for two tasks. We show demonstrations (blue arrows) but omit instructions. To visualize the intensity of the potentials, we assume only the target block can be moved, while rewards and potentials are computed for any block movement. We illustrate the sparse problem reward (left column) as a potential function and consider only its positive component, which is focused on the goal. The middle column adds the distance-based potential. The right column adds both potentials.
The goal is to generate more informative updates by adding information to the reward. We use this method to leverage the training demonstrations, a common form of supervision for training systems that map language to actions. Reward shaping allows us to fully use this type of supervision in a reinforcement learning framework, and to effectively combine learning from demonstrations and exploration.
Adding an arbitrary shaping term can change the optimality of policies and modify the original problem, for example by making policies that are bad according to the problem reward optimal according to the shaped function.³ Ng et al. (1999) and Wiewiora et al. (2003) outline potential-based terms that realize sufficient conditions for safe shaping.⁴ Adding a shaping term is safe if the order of policies according to the shaped reward is identical to the order according to the original problem reward. While safe shaping only applies to optimizing the total reward, we show empirically the effectiveness of the safe shaping terms we design in a contextual bandit setting.
We introduce two shaping terms. The final shaped reward is the sum of the two terms and the problem reward. Similar to the problem reward, we define example-specific shaping terms. We modify the reward function signature as required.

Distance-based Shaping (F1) The first shaping term measures whether the agent moved closer to the goal state. We design it to be a safe potential-based shaping term:
³For example, adding a shaping term F = −R will result in a shaped reward that is always 0, and any policy will be trivially optimal with respect to it.
F_1(s_j, a_j, s_{j+1}) = \phi_1(s_{j+1}) - \phi_1(s_j) .

The potential φ_1(s) is proportional to the negative distance from the goal state s^{(i)}_{m^{(i)}}. Formally, \phi_1(s) = -\eta \, \| s - s^{(i)}_{m^{(i)}} \|, where η is a constant scaling factor and ||·|| is a distance metric. In the block world, the distance between two states is the sum of the Euclidean distances between the positions of each block in the two states, and η is the inverse of the block width. The middle column in Figure 3 visualizes the potential φ_1.

Trajectory-based Shaping (F2) Distance-based shaping may lead the agent to sub-optimal states, for example when an obstacle blocks the direct path to the goal state, and the agent must temporarily increase its distance from the goal to bypass it. We incorporate complete trajectories by using a simplification of the shaping term introduced by Brys et al. (2015). Unlike F_1, it requires access to the previous state and action. It is based on the
look-back advice shaping term of Wiewiora et al. (2003), who introduced safe potential-based shaping that considers the previous state and action. The second term is:

F_2(s_{j-1}, a_{j-1}, s_j, a_j) = \phi^{(i)}_2(s_j, a_j) - \phi^{(i)}_2(s_{j-1}, a_{j-1}) .

Given the demonstration ē^{(i)} = ⟨(s_1, a_1), . . . , (s_m, a_m)⟩, to compute the potential φ^{(i)}_2(s, a), we identify the demonstration state s_k closest to s. If η||s_k − s|| < 1 and a_k = a, then φ^{(i)}_2(s, a) = 1.0; otherwise φ^{(i)}_2(s, a) = −δ_f, where δ_f is a penalty parameter. We use the same distance computation and parameter η as in F_1. When the agent is in a state close to a demonstration state, this term encourages taking the action taken in the related demonstration state. The right column in Figure 3 visualizes the effect of the potential φ_2.
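The two shaping terms can be sketched as below. States are taken to be arrays of block positions; this representation and the function names are assumptions for illustration only.

```python
import numpy as np

def state_distance(s1, s2):
    """Sum of Euclidean distances between corresponding block positions
    (states as arrays of shape [num_blocks, 2])."""
    return float(np.linalg.norm(np.asarray(s1) - np.asarray(s2), axis=1).sum())

def f1(state, next_state, goal_state, eta):
    """Distance-based shaping: difference of potentials phi1(s) = -eta * d(s, goal)."""
    phi1 = lambda s: -eta * state_distance(s, goal_state)
    return phi1(next_state) - phi1(state)

def phi2(state, action, demonstration, eta, delta_f):
    """Trajectory-based potential: rewards matching the closest demonstration step."""
    closest_state, closest_action = min(
        demonstration, key=lambda sa: state_distance(sa[0], state))
    if eta * state_distance(closest_state, state) < 1.0 and closest_action == action:
        return 1.0
    return -delta_f

def f2(prev_state, prev_action, state, action, demonstration, eta, delta_f):
    """Look-back shaping term: potential difference over consecutive state-action pairs."""
    return (phi2(state, action, demonstration, eta, delta_f)
            - phi2(prev_state, prev_action, demonstration, eta, delta_f))
```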
# 7 Experimental Setup
Environment We use the environment of Bisk et al. (2016). The original task required predicting the source and target positions for a single block given an instruction. In contrast, we address the task of moving blocks on the plane to execute instructions given visual input. This requires generating the complete sequence of actions needed to complete the instruction. The environment contains up to 20 blocks marked with logos or digits. Each block can be moved in four directions. Including the STOP action, in each step the agent selects between 81 actions. The set of actions is constant and is not limited to the blocks present.
The transition function is deterministic. The size of each block step is 0.04 of the board size. The agent observes the board from above. We adopt a relatively challenging setup with a large action space. While a simpler setup, for example decomposing the problem into source and target prediction and using a planner, is likely to perform better, we aim to minimize task-specific assumptions and the engineering of separate modules. However, to better understand the problem, we also report results for the decomposed task with a planner.
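The fixed action set can be enumerated as in the short sketch below; block and direction names are placeholders, but the counts (20 blocks, 4 directions, plus STOP, for 81 actions) follow the description above.

```python
from itertools import product

BLOCKS = [f"block-{i}" for i in range(20)]        # up to 20 logo/digit blocks
DIRECTIONS = ["north", "south", "east", "west"]

# The action set is constant regardless of which blocks are present in a task.
ACTIONS = [("STOP", None)] + list(product(BLOCKS, DIRECTIONS))
assert len(ACTIONS) == 81
```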
Data Bisk et al. (2016) collected a corpus of instructions paired with start and goal states. Figure 1 shows example instructions. The original data includes instructions for moving one block or multiple blocks. Single-block instructions are relatively similar to navigation instructions and referring expressions. While they present much of the complexity of natural language understanding and grounding, they rarely display the planning complexity of multi-block instructions, which are beyond the scope of this paper. Furthermore, the original data does not include demonstrations. While generating demonstrations for moving a single block is straightforward, disambiguating action ordering when multiple blocks are moved is challenging. Therefore, we focus on instructions where a single block changes its position between the start and goal states, and restrict demonstration generation to moving the changed block. The remaining data, and the complexity it introduces, provide an important direction for future work.
To create demonstrations, we compute the shortest paths. While this process may introduce noise for instructions that specify specific trajectories (e.g., move SRI two steps north and . . . ) rather than only describing the goal state, analysis of the data shows this issue is limited. Out of 100 sampled instructions, 92 describe the goal state rather than the trajectory. A secondary source of noise is due to the discretization of the state space. As a result, the agent often cannot reach the exact target position. The demonstrations error illustrates this problem (Table 3). To provide task completion reward during learning, we relax the state comparison, and consider states to be equal if the sum of block distances is under the size of one block.
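The relaxed goal test can be sketched as follows; the array-of-positions state representation is an assumption used only for illustration.

```python
import numpy as np

def states_equal(state, goal_state, block_size=1.0):
    """Relaxed equality used for the completion reward: states match when the
    summed per-block Euclidean distance is under one block size."""
    total = np.linalg.norm(np.asarray(state) - np.asarray(goal_state), axis=1).sum()
    return float(total) < block_size
```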
The corpus includes 11,871/1,719/3,177 instructions for training/development/testing. Table 1 shows corpus statistics compared to the commonly used SAIL navigation corpus (MacMahon et al., 2006; Chen and Mooney, 2011).
|  | SAIL | Blocks |
| --- | --- | --- |
| Number of instructions | 3,237 | 16,767 |
| Mean instruction length | 7.96 | 15.27 |
| Vocabulary | 563 | 1,426 |
| Mean trajectory length | 3.12 | 15.4 |
Table 1: Corpus statistics for the block environment we use and the SAIL navigation domain.
While the SAIL agent only observes its immediate surroundings, overall the blocks domain provides more complex instructions. Furthermore, the SAIL environment includes only 400 states, which is insufficient for generalization with vision input. We compare to other data sets in Appendix D.

Evaluation We evaluate task completion error as the sum of Euclidean distances for each block between its position at the end of the execution and in the gold goal state. We divide distances by block size to normalize for the image size. In contrast, Bisk et al. (2016) evaluate the selection of the source and target positions independently.

Systems We report performance of ablations, the upper bound of following the demonstrations (Demonstrations), and five baselines: (a) STOP: the agent immediately stops, (b) RANDOM: the agent takes random actions, (c) SUPERVISED: supervised learning with a maximum-likelihood estimate using demonstration state-action pairs, (d) DQN: deep Q-learning with both shaping terms (Mnih et al., 2015), and (e) REINFORCE: policy gradient with cumulative episodic reward
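The evaluation metric can be written compactly as below; the state representation is again an illustrative assumption.

```python
import numpy as np

def task_completion_error(final_state, goal_state, block_size):
    """Sum of per-block Euclidean distances between the final and gold goal states,
    normalized by block size (positions as [num_blocks, 2] arrays)."""
    distances = np.linalg.norm(np.asarray(final_state) - np.asarray(goal_state), axis=1)
    return float(distances.sum()) / block_size
```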
with both shaping terms (Sutton et al., 1999). Full system details are given in Appendix B.

Parameters and Initialization Full details are in Appendix C. We consider K = 4 previous images and a horizon length of J = 40. We initialize our model with the SUPERVISED model.
# 8 Results
for instruction execution. While existing approaches assume access to
structured environment representations or use a pipeline of separately trained
models, we learn a single model to jointly reason about linguistic and visual
input. We use reinforcement learning in a contextual bandit setting to train a
neural network agent. To guide the agent's exploration, we use reward shaping
with different forms of supervision. Our approach does not require intermediate
representations, planning procedures, or training different models. We evaluate
in a simulated environment, and show significant improvements over supervised
learning and common reinforcement learning variants. | http://arxiv.org/pdf/1704.08795 | Dipendra Misra, John Langford, Yoav Artzi | cs.CL | In Proceedings of the Conference on Empirical Methods in Natural
Language Processing (EMNLP), 2017 | null | cs.CL | 20170428 | 20170722 | [] |
1704.08795 | 41 | Table 2: Mean and median (Med.) development results.
Algorithm Demonstrations STOP RANDOM Ensembles SUPERVISED REINFORCE DQN Our Approach Distance Error Min. Distance Mean Med. Mean Med. 0.31 0.37 6.12 6.23 6.09 15.11 0.31 6.12 15.35 0.37 6.23 6.21 4.95 5.69 6.15 3.78 4.53 5.57 5.97 3.14 3.82 5.11 5.86 2.83 3.33 4.99 5.77 2.07
Table 3: Mean and median (Med.) test results.
plexity (Kearns et al., 1999; Krishnamurthy et al., 2016). We also report results using ensembles of the three models. | 1704.08795#41 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
We ablate different parts of our approach. Ablations of supervised initialization (our approach w/o sup. init) or the previous action (our approach w/o prev. action) result in an increase in error. While the contribution of initialization is modest, it provides faster learning. On average, after two epochs, we observe an error of 3.94 with initialization and 6.01 without. We hypothesize that the F2 shaping term, which uses full demonstrations, helps to narrow the gap at the end of learning. Without supervised initialization and F2, the error increases to 5.45 (the 0% point in Figure 4). We observe the contribution of each shaping term and their combination. To study the benefit of potential-based shaping, we experiment with a negative distance-to-goal reward. This reward replaces the problem reward and encourages getting closer to the goal (our approach w/ distance reward). With this reward, learning fails to converge, leading to a relatively high error.
Figure 4 shows our approach with a varying amount of supervision.
Figure 4: Mean distance error as a function of the ratio of training examples that include complete trajectories. The rest of the data includes the goal state only.
We remove demonstrations from both supervised initialization and the F2 shaping term. For example, when only 25% are available, only 25% of the data is available for initialization and the F2 term is only present for that part of the data. While some demonstrations are necessary for effective learning, we get most of the benefit with only 12.5%.
Table 3 provides test results, using the ensembles to decrease the risk of overfitting the development set. We observe similar trends to the development results, with our approach outperforming all baselines. The remaining gap to the demonstrations upper bound illustrates the need for future work.
To understand performance better, we measure minimal distance (min. distance in Tables 2 and 3), the closest the agent got to the goal. We observe a strong trend: the agent often gets close to the goal and fails to stop. This behavior is also reflected in the number of steps the agent takes. While the mean number of steps in development demonstrations is 15.2, the agent generates on average 28.7 steps, and 55.2% of the time it takes the maximum number of allowed steps (40). Testing on the training data shows an average of 21.75 steps and exhausts the number of steps 29.3% of the time. The mean number of steps in training demonstrations is 15.5. This illustrates the challenge of learning how to behave at an absorbing state, which is observed relatively rarely during training. This behavior also shows in our video.⁵ We also evaluate a supervised learning variant that assumes a perfect planner.⁶ This setup is similar to Bisk et al. (2016), except using raw image input. It allows us to roughly understand how well the agent generates actions. We observe a mean error of 2.78 on the development set, an improvement of almost two points over supervised learning with our approach. This illustrates the complexity of the complete problem.

⁵https://github.com/clic-lab/blocks

⁶As there is no sequence of decisions, our reinforcement approach is not appropriate for the planner experiment. The architecture details are described in Appendix B.
We conduct a shallow linguistic analysis to understand the agent behavior with regard to differences in the language input. As expected, the agent is sensitive to unknown words. For instructions without unknown words, the mean development error is 3.49. It increases to 3.97 for instructions with a single unknown word, and to 4.19 for two.⁷ We also study the agent behavior when observing new phrases composed of known words by looking at instructions with new n-grams and no unknown words. We observe no significant correlation between performance and new bi-grams and tri-grams. We also see no meaningful correlation between instruction length and performance. Although counterintuitive given the linguistic complexities of longer instructions, it aligns with results in machine translation (Luong et al., 2015).
# 9 Conclusions
We study the problem of learning to execute instructions in a situated environment given only raw visual observations. Supervised approaches do not explore adequately to handle test-time errors, and reinforcement learning approaches require a large number of samples for good convergence. Our solution provides an effective combination of both approaches: reward shaping to create relatively stable optimization in a contextual bandit setting, which takes advantage of a signal similar to supervised learning, with a reinforcement basis that admits substantial exploration and easy avenues for smart initialization. This combination is designed for the few-samples regime we address. When the number of samples is unbounded, the drawbacks observed in this scenario for optimizing longer-term reward do not hold.
# Acknowledgments
This research was supported by a Google Faculty Award, an Amazon Web Services Research Grant, and a Schmidt Sciences Research Award. We thank Alane Suhr, Luke Zettlemoyer, and the anonymous reviewers for their helpful feedback, and Claudia Yan for technical help. We also thank the Cornell NLP group and the Microsoft Research Machine Learning NYC group for their support and insightful comments.
⁷This trend continues, although the number of instructions is too low (< 20) to be reliable.
1704.08795 | 48 | 7This trend continues, although the number of instructions is too low (< 20) to be reliable.
# References

Alekh Agarwal, Daniel J. Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire. 2014. Taming the monster: A fast and simple algorithm for contextual bandits. In Proceedings of the International Conference on Machine Learning.
Jacob Andreas and Dan Klein. 2015. Alignment-based compositional semantics for instruction following. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. https://doi.org/10.18653/v1/D15-1138.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016a. Learning to compose neural networks for question answering. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. https://doi.org/10.18653/v1/N16-1181.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016b. Neural module networks. In Conference on Computer Vision and Pattern Recognition.
1704.08795 | 49 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016b. Neural module networks. In Conference on Computer Vision and Pattern Recog- nition.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question an- swering. In International Journal of Computer Vi- sion.
Yoav Artzi, Dipanjan Das, and Slav Petrov. 2014a. Learning compact lexicons for CCG semantic pars- ing. In Proceedings of the 2014 Conference on Em- pirical Methods in Natural Language Processing. https://doi.org/10.3115/v1/D14-1134.
Yoav Artzi, Maxwell Forbes, Kenton Lee, and Maya Cakmak. 2014b. Programming by demonstration with situated semantic parsing. In AAAI Fall Sym- posium Series.
Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for map- ping instructions to actions. Transactions of the Association of Computational Linguistics 1:49â62. http://aclweb.org/anthology/Q13-1005. | 1704.08795#49 | Mapping Instructions and Visual Observations to Actions with Reinforcement Learning | We propose to directly map raw visual observations and text input to actions
Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. 2002. The nonstochastic multi-armed bandit problem. SIAM J. Comput. 32(1):48–77.
Yonatan Bisk, Deniz Yuret, and Daniel Marcu. 2016. Natural language communication with robots. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. https://doi.org/10.18653/v1/N16-1089.
S.R.K. Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. http://aclweb.org/anthology/P09-1010.
S.R.K. Branavan, Luke Zettlemoyer, and Regina Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. http://aclweb.org/anthology/P10-1129.
Tim Brys, Anna Harutyunyan, Halit Bener Suay, Sonia Chernova, Matthew E. Taylor, and Ann Nowé. 2015. Reinforcement learning from demonstration through shaping. In Proceedings of the International Joint Conference on Artificial Intelligence.
David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the National Conference on Artificial Intelligence.
Wenhu Chen, Aurélien Lucchi, and Thomas Hofmann. 2016. Bootstrap, review, decode: Using out-of-domain textual data to improve image captioning. CoRR abs/1611.05321.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. CoRR abs/1504.00325.
Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. http://aclweb.org/anthology/D16-1245.
Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science 14:179–211.
Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. 2016. Deep reinforcement learning with a natural language action space. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). https://doi.org/10.18653/v1/P16-1153.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2016. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. CoRR abs/1612.06890.
Sham Kakade and John Langford. 2002. Approximately optimal approximate reinforcement learning. In Machine Learning, Proceedings of the Nineteenth International Conference (ICML 2002), University of New South Wales, Sydney, Australia, July 8-12, 2002.
Michael Kearns, Yishay Mansour, and Andrew Y. Ng. 1999. A sparse sampling algorithm for near-optimal planning in large Markov decision processes. In Proceedings of the International Joint Conference on Artificial Intelligence.
Joohyun Kim and Raymond Mooney. 2012. Unsupervised PCFG induction for grounded language learning with highly ambiguous supervision. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. http://aclweb.org/anthology/D12-1040.
Joohyun Kim and Raymond Mooney. 2013. Adapting discriminative reranking to grounded language learning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). http://aclweb.org/anthology/P13-1022.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations.
Jens Kober, J. Andrew Bagnell, and Jan Peters. 2013. Reinforcement learning in robotics: A survey. International Journal of Robotics Research 32:1238–1274.
Akshay Krishnamurthy, Alekh Agarwal, and John Langford. 2016. PAC reinforcement learning with rich observations. In Advances in Neural Information Processing Systems.
John Langford and Tong Zhang. 2007. The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6, 2007.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. 2016. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research 17.
Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. http://aclweb.org/anthology/D16-1127.
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. http://aclweb.org/anthology/D15-1166.
Matthew MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, action in route instructions. In Proceedings of the National Conference on Artificial Intelligence.
Cynthia Matuszek, Dieter Fox, and Karl Koscher. 2010. Following directions using statistical machine translation. In Proceedings of the International Conference on Human-Robot Interaction.
Cynthia Matuszek, Evan Herbst, Luke S. Zettlemoyer, and Dieter Fox. 2012. Learning to parse natural language commands to a robot control system. In Proceedings of the International Symposium on Experimental Robotics.
Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? Selective generation using LSTMs with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. https://doi.org/10.18653/v1/N16-1086.
Dipendra K. Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. 2016. Tell me Dave: Context-sensitive grounding of natural language to manipulation instructions. The International Journal of Robotics Research 35(1-3):281–300.
Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lexicon induction for high-level instructions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). https://doi.org/10.3115/v1/P15-1096.
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In Proceedings of the International Conference on Machine Learning.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. 2013. Playing Atari with deep reinforcement learning. In Advances in Neural Information Processing Systems.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, and Georg Ostrovski. 2015. Human-level control through deep reinforcement learning. Nature 518(7540).
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. https://doi.org/10.18653/v1/D15-1001.
Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquiring external evidence with reinforcement learning. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. http://aclweb.org/anthology/D16-1261.
Andrew Y. Ng, Daishi Harada, and Stuart J. Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the International Conference on Machine Learning.
Junhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. 2016. Control of memory, active perception, and action in Minecraft. In Proceedings of the International Conference on Machine Learning.
Rohan Paul, Jacob Arkin, Nicholas Roy, and Thomas M. Howard. 2016. Efficient grounding of abstract spatial concepts for natural language interaction with robot manipulators. In Robotics: Science and Systems.
Andrei A. Rusu, Matej Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. 2016. Sim-to-real robot learning from pixels with progressive nets. CoRR.
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. 2015. Trust region policy optimization.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489.
Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of compositional language for visual reasoning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Jaeyong Sung, Seok Hyun Jin, and Ashutosh Saxena. 2015. Robobarista: Object part based transfer of manipulation trajectories from crowd-sourcing in 3D pointclouds. In International Symposium on Robotics Research.
Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement learning: An introduction. IEEE Trans. Neural Networks 9:1054–1054.
Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems.
Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew Walter, Ashis G. Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the National Conference on Artificial Intelligence.
Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. http://aclweb.org/anthology/P10-1083.
Bonnie Webber, Norman Badler, Barbara Di Eugenio, Christopher Geib, Libby Levison, and Michael Moore. 1995. Instructions, intentions and expectations. Artificial Intelligence 73(1):253–269.
Eric Wiewiora, Garrison W. Cottrell, and Charles Elkan. 2003. Principled methods for advising reinforcement learning agents. In Proceedings of the International Conference on Machine Learning.
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8.
Ronald J. Williams and Jing Peng. 1991. Function optimization using connectionist reinforcement learning algorithms. Connection Science 3(3):241–268.
Terry Winograd. 1972. Understanding natural language. Cognitive Psychology 3(1):1–191.
Kelvin Xu, Jimmy Ba, Jamie Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Machine Learning.
Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. 2017. Target-driven visual navigation in indoor scenes using deep reinforcement learning.
# A Reward Shaping Theorems
In Section 6, we introduce two reward shaping terms. We follow the safe-shaping theorems of Ng et al. (1999) and Wiewiora et al. (2003). The theorems outline potential-based terms that realize sufficient conditions for safe shaping. Applying safe terms guarantees that the order of policies under the original problem reward does not change. While the theory only applies when optimizing the total reward, we show empirically the effectiveness of the safe shaping terms in a contextual bandit setting. For convenience, we provide the definitions of potential-based shaping terms and the theorems introduced by Ng et al. (1999) and Wiewiora et al. (2003) using our notation. We refer the reader to the original papers for the full details and proofs.
The distance-based shaping term F1 is defined based on the theorem of Ng et al. (1999):
Definition. A shaping term F : S × A × S → R is potential-based if there exists a function φ : S → R such that, at time j, F(s_j, a_j, s_{j+1}) = γφ(s_{j+1}) − φ(s_j), ∀ s_j, s_{j+1} ∈ S and a_j ∈ A, where γ ∈ [0, 1] is a future reward discounting factor. The function φ is the potential function of the shaping term F.
Theorem. Given a reward function R(s_j, a_j), if the shaping term is potential-based, the shaped reward R_F(s_j, a_j, s_{j+1}) = R(s_j, a_j) + F(s_j, a_j, s_{j+1}) does not modify the total order of policies.
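The intuition behind the theorem is the standard telescoping argument (stated here for exposition; it is not reproduced from the paper): summed along an execution, a potential-based term contributes only a difference of potentials at the endpoints.

```latex
% Telescoping identity for a potential-based shaping term (standard argument, for intuition only):
\sum_{j=1}^{T} \gamma^{j-1} F(s_j, a_j, s_{j+1})
  = \sum_{j=1}^{T} \gamma^{j-1}\left(\gamma\,\phi(s_{j+1}) - \phi(s_j)\right)
  = \gamma^{T}\phi(s_{T+1}) - \phi(s_1)
% The shaped return therefore differs from the original return only through the potentials of the
% first and last states, which is the algebra underlying the policy-order invariance result.
```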
In the definition of F1, we set the discounting term γ to 1.0 and omit it.
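As an illustration of how such a term can be implemented (the exact distance-based potential of Section 6 is not restated here, so the goal-distance potential and state attributes below are assumed stand-ins), a minimal Python sketch with γ = 1:

```python
# Minimal sketch of a potential-based shaping term with gamma = 1:
# F(s, a, s') = phi(s') - phi(s). The negative-goal-distance potential and the
# state attributes (agent_pos, goal_pos) are illustrative assumptions, not the
# paper's exact F1 definition.

def potential_shaping(phi):
    """Build F(state, action, next_state) = phi(next_state) - phi(state)."""
    def shaping_term(state, action, next_state):
        return phi(next_state) - phi(state)
    return shaping_term

def negative_goal_distance(state):
    # Assumed state interface: agent_pos and goal_pos are (x, y) tuples.
    ax, ay = state.agent_pos
    gx, gy = state.goal_pos
    return -(((ax - gx) ** 2 + (ay - gy) ** 2) ** 0.5)

f1 = potential_shaping(negative_goal_distance)
# Shaped reward used during learning: r_shaped = r_problem + f1(s, a, s_next)
```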
The trajectory-based shaping term F2 follows the shaping term introduced by Brys et al. (2015). To define it, we use the look-back advice shaping term of Wiewiora et al. (2003), who extended the potential-based term of Ng et al. (1999) for terms that consider the previous state and action:
Definition. A shaping term F : S × A × S × A → R is potential-based if there exists a function φ : S × A → R such that, at time j, F(s_{j−1}, a_{j−1}, s_j, a_j) = γφ(s_j, a_j) − φ(s_{j−1}, a_{j−1}), ∀ s_j, s_{j−1} ∈ S and a_j, a_{j−1} ∈ A, where γ ∈ [0, 1] is a future reward discounting factor. The function φ is the potential function of the shaping term F.
Theorem. Given a reward function R(s_j, a_j), if the shaping term is potential-based, the shaped reward R_F(s_{j−1}, a_{j−1}, s_j, a_j) = R(s_j, a_j) + F(s_{j−1}, a_{j−1}, s_j, a_j) does not modify the total order of policies.
In the definition of F2 as well, we set the discounting term γ to 1.0 and omit it.
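A matching sketch for the look-back advice form, again with γ = 1 and an assumed potential (agreement with an annotated demonstration) rather than the exact F2 of Section 6:

```python
# Minimal sketch of a look-back advice shaping term (Wiewiora et al., 2003) with gamma = 1:
# F(s_prev, a_prev, s, a) = phi(s, a) - phi(s_prev, a_prev). The demonstration-agreement
# potential is an illustrative assumption, not the paper's exact F2 definition.

def lookback_shaping(phi):
    """Build F(prev_state, prev_action, state, action) = phi(state, action) - phi(prev_state, prev_action)."""
    def shaping_term(prev_state, prev_action, state, action):
        return phi(state, action) - phi(prev_state, prev_action)
    return shaping_term

def demonstration_potential(demonstration_pairs):
    """Potential over state-action pairs: 1 if the pair appears in the demonstration, else 0."""
    demo = set(demonstration_pairs)  # assumes hashable (state, action) pairs
    def phi(state, action):
        return 1.0 if (state, action) in demo else 0.0
    return phi

f2 = lookback_shaping(demonstration_potential([]))
```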
# B Evaluation Systems
We implement multiple systems for evaluation.
STOP: The agent performs the STOP action immediately at the beginning of execution.
RANDOM: The agent samples actions uniformly until STOP is sampled or J actions have been sampled, where J is the execution horizon.
SUPERVISED: Given the training data of N instruction-state-execution triplets, we generate training data of instruction-state-action triplets and optimize the log-likelihood of the data. Formally, we optimize the objective:
J = (1/N) Σ_{i=1}^{N} Σ_{j=1}^{m_i} log π(s_j^{(i)}, a_j^{(i)}),
where execution i in the training data contains m_i instruction-state-action triplets.
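A minimal sketch of this objective, assuming a `policy` callable that maps a state to a mapping from actions to probabilities (an interface assumed here for illustration):

```python
# Sketch of the supervised objective: average over the N annotated executions of the
# total log-likelihood of the demonstrated action at each step. `policy(state)[action]`
# is an assumed interface returning the probability of `action` in `state`.
import math

def supervised_objective(policy, executions):
    """executions: list of N executions, each a list of (state, action) pairs (m_i steps)."""
    total = 0.0
    for execution in executions:          # i = 1, ..., N
        for state, action in execution:   # j = 1, ..., m_i
            total += math.log(policy(state)[action])
    return total / len(executions)        # maximized during training
```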