enc1(x, y_c) = p^T x̃,   p = [1/M, . . . , 1/M],   x̃ = [F x_1, . . . , F x_M]

Where the input-side embedding matrix F ∈ R^{H×V} is the only new parameter of the encoder and p ∈ [0, 1]^M is a uniform distribution over the input words.
4Each of the weight matrices U, V, W also has a corresponding bias term. For readability, we omit these terms throughout the paper.
For summarization this model can capture the relative importance of words to distinguish content words from stop words or embellishments. Potentially the model can also learn to combine words; although it is inherently limited in representing contiguous phrases.
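As a minimal illustration (our own sketch, not the paper's Torch implementation), the bag-of-words encoder reduces to a uniform average of input-side embeddings; here F is assumed to be an H×V embedding matrix and x_ids a list of input word indices:

```python
import numpy as np

def enc_bow(x_ids, F):
    """enc1: uniform average of the input word embeddings (bag-of-words)."""
    M = len(x_ids)
    x_tilde = F[:, x_ids]        # H x M matrix of input embeddings
    p = np.full(M, 1.0 / M)      # uniform distribution over input positions
    return x_tilde @ p           # H-dimensional sentence representation
```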
Convolutional Encoder To address some of the modelling issues with bag-of-words we also consider using a deep convolutional encoder for the input sentence. This architecture improves on the bag-of-words model by allowing local interactions between words, while not requiring the context y_c when encoding the input.
We utilize a standard time-delay neural network (TDNN) architecture, alternating between temporal convolution layers and max pooling layers.
Where F is a word embedding matrix and Q^{L×H×2Q+1} consists of a set of filters for each layer {1, . . . , L}. Eq. 7 is a temporal (1D) convolution layer, Eq. 6 consists of a 2-element temporal max pooling layer and a pointwise non-linearity, and the final output Eq. 5 is a max over time. At each layer x̃ is one half the size of x̄. For simplicity we assume that the convolution is padded at the boundaries, and that M is greater than 2^L so that the dimensions are well-defined.
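Since Eqs. 5-7 are not reproduced in this excerpt, the following is only a hedged sketch of the TDNN structure described above (alternating padded temporal convolutions, 2-element max pooling, and a final max over time); tanh is assumed as the pointwise non-linearity and the filter shapes are purely illustrative:

```python
import numpy as np

def temporal_conv(h, W, Q=2):
    """1D convolution over time: each output column sees a window of 2Q+1 columns."""
    H, M = h.shape
    padded = np.pad(h, ((0, 0), (Q, Q)))                  # pad at the boundaries
    out = np.empty((W.shape[0], M))
    for i in range(M):
        window = padded[:, i:i + 2 * Q + 1].reshape(-1)   # flatten the local window
        out[:, i] = W @ window
    return out

def enc_tdnn(x_emb, filters, Q=2):
    """Sketch of the TDNN encoder: L layers of (conv -> 2-max-pool -> tanh), then max over time."""
    h = x_emb
    for W in filters:
        c = temporal_conv(h, W, Q)
        c = c[:, : (c.shape[1] // 2) * 2]                 # drop an odd trailing column
        h = np.tanh(np.maximum(c[:, 0::2], c[:, 1::2]))   # pool pairs, halving the length
    return h.max(axis=1)                                   # final max over time

# Example with illustrative sizes: H=400, M=32 input words, L=3 layers
H, M, Q = 400, 32, 2
filters = [np.random.randn(H, H * (2 * Q + 1)) * 0.01 for _ in range(3)]
print(enc_tdnn(np.random.randn(H, M), filters, Q).shape)   # (400,)
```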
Attention-Based Encoder While the convolutional encoder has richer capacity than bag-of-words, it still is required to produce a single representation for the entire input sentence. A similar issue in machine translation inspired Bahdanau et al. (2014) to instead utilize an attention-based contextual encoder that constructs a representation based on the generation context. Here we note that if we exploit this context, we can actually use a rather simple model similar to bag-of-words:
enc3(x, y_c) = p^T x̄,   p ∝ exp(x̃ P ỹ'_c),
x̃ = [F x_1, . . . , F x_M],   ỹ'_c = [G y_{i−C+1}, . . . , G y_i],
∀i:  x̄_i = Σ_{q=i−Q}^{i+Q} x̃_q / Q
Where G ∈ R^{D×V} is an embedding of the context, P ∈ R^{H×(CD)} is a new weight matrix parameter mapping between the context embedding and input embedding, and Q is a smoothing window. The full model is shown in Figure 3b.
Informally we can think of this model as simply replacing the uniform distribution in bag-of-words with a learned soft alignment, P, between the input and the summary. Figure 1 shows an example of this distribution p as a summary is generated. The soft alignment is then used to weight the smoothed version of the input x̄ when constructing the representation. For instance if the current context aligns well with position i then the words x_{i−Q}, . . . , x_{i+Q} are highly weighted by the encoder. Together with the NNLM, this model can be seen as a stripped-down version of the attention-based neural machine translation model.5
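A hedged NumPy sketch of enc3 under the definitions above (our own illustration; F, G, P and the window Q are as defined, and the softmax normalization uses a numerically stable shift):

```python
import numpy as np

def enc_attention(x_ids, yc_ids, F, G, P, Q=2):
    """Sketch of the attention-based encoder enc3."""
    x_tilde = F[:, x_ids]                       # H x M input embeddings
    y_tilde = G[:, yc_ids].reshape(-1)          # (C*D,) flattened context embedding
    scores = x_tilde.T @ (P @ y_tilde)          # M alignment scores  x̃ᵀ P ỹ'
    p = np.exp(scores - scores.max())
    p /= p.sum()                                # soft alignment over input positions
    M = x_tilde.shape[1]
    # smoothed input: average each position with its Q neighbours on either side
    x_bar = np.stack([x_tilde[:, max(0, i - Q): i + Q + 1].sum(axis=1) / Q
                      for i in range(M)], axis=1)
    return x_bar @ p                            # H-dimensional representation
```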
# 3.3 Training
The lack of generation constraints makes it possible to train the model on arbitrary input-output pairs. Once we have defined the local conditional model, p(y_{i+1} | x, y_c; θ), we can estimate the parameters to minimize the negative log-likelihood of a set of summaries. Define this training set as consisting of J input-summary pairs (x^{(1)}, y^{(1)}), . . . , (x^{(J)}, y^{(J)}). The negative log-likelihood conveniently factors6 into a term for each token in the summary:
NLL(θ) = − Σ_{j=1}^{J} log p(y^{(j)} | x^{(j)}; θ) = − Σ_{j=1}^{J} Σ_{i=1}^{N−1} log p(y^{(j)}_{i+1} | x^{(j)}, y_c; θ)
We minimize NLL by using mini-batch stochastic gradient descent. The details are described further in Section 7.
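Concretely, the factored objective can be sketched as follows, where `model(x, y_c)` is a hypothetical callable returning a dictionary of next-word probabilities and the gold context of C previous words is used (cf. footnote 6):

```python
import numpy as np

def nll(model, pairs, C=5):
    """Negative log-likelihood of a set of (input, summary) pairs, factored per token."""
    total = 0.0
    for x, y in pairs:
        padded = ["<s>"] * C + list(y)                # pad the context with start symbols
        for i, target in enumerate(y):
            y_c = padded[i:i + C]                     # gold-standard context window
            probs = model(x, y_c)                     # dict: candidate word -> probability
            total -= np.log(probs[target])            # accumulate -log p(y_{i+1} | x, y_c)
    return total
```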
5To be explicit, compared to Bahdanau et al. (2014) our model uses an NNLM instead of a target-side LSTM, source-side windowed averaging instead of a source-side bi-directional RNN, and a weighted dot-product for alignment instead of an alignment MLP.
6This is dependent on using the gold standard contexts y_c. An alternative is to use the predicted context within a structured or reinforcement-learning style objective.
# 4 Generating Summaries
We now return to the problem of generating summaries. Recall from Eq. 4 that our goal is to find,
y* = argmax_{y ∈ Y} Σ_{i=0}^{N−1} g(y_{i+1}, x, y_c).
Unlike phrase-based machine translation where inference is NP-hard, it actually is tractable in theory to compute y*. Since there is no explicit hard alignment constraint, Viterbi decoding can be applied and requires O(NV^C) time to find an exact solution. In practice though V is large enough to make this difficult. An alternative approach is to approximate the arg max with a strictly greedy or deterministic decoder.
A compromise between exact and greedy decoding is to use a beam-search decoder (Algorithm 1) which maintains the full vocabulary V while limiting itself to K potential hypotheses at each position of the summary. This has been the standard approach for neural MT models (Bahdanau et al., 2014; Sutskever et al., 2014; Luong et al., 2015). The beam-search algorithm is shown here, modified for the feed-forward model:
Algorithm 1 Beam Search
Input: Parameters θ, beam size K, input x
Output: Approx. K-best summaries
π[0] ← {ε}
S ← V if abstractive else {x_i | ∀i}
for i = 0 to N − 1 do
    ▷ Generate Hypotheses
    N ← {[y, y_{i+1}] | y ∈ π[i], y_{i+1} ∈ S}
    ▷ Hypothesis Recombination
    H ← {y ∈ N | s(y, x) > s(y', x) ∀y' ∈ N s.t. y_c = y'_c}
    ▷ Filter K-Max
    π[i + 1] ← K-argmax_{y ∈ H} g(y_{i+1}, y_c, x) + s(y, x)
end for
return π[N]
As with Viterbi this beam search algorithm is much simpler than beam search for phrase-based MT. Because there is no explicit constraint that each source word be used exactly once there is no need to maintain a bit set and we can simply move from left-to-right generating words. The beam search algorithm requires O(KNV) time. From a computational perspective though, each round of beam search is dominated by computing p(y_i | x, y_c) for each of the K hypotheses. These can be computed as a mini-batch, which in practice greatly reduces the factor of K.
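A hedged Python sketch of this beam search for the feed-forward model (our own illustration; `score_next` is a hypothetical function returning next-word log-probabilities, and the recombination key assumes a context size of C = 5):

```python
def beam_search(score_next, vocab, x, N, K, C=5):
    """Approximate K-best decoding, mirroring Algorithm 1 above."""
    beam = [((), 0.0)]                                   # (summary prefix, cumulative score)
    for _ in range(N):
        # generate hypotheses: extend every prefix in the beam with every candidate word
        candidates = [(y + (w,), s + lp)
                      for y, s in beam
                      for w, lp in score_next(x, y).items() if w in vocab]
        # hypothesis recombination: keep only the best-scoring prefix per context y_c
        best = {}
        for y, s in candidates:
            key = y[-C:]
            if key not in best or s > best[key][1]:
                best[key] = (y, s)
        # filter K-max
        beam = sorted(best.values(), key=lambda t: t[1], reverse=True)[:K]
    return beam
```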
# 5 Extension: Extractive Tuning
While we will see that the attention-based model is effective at generating summaries, it does miss an important aspect seen in the human-generated references. In particular the abstractive model does not have the capacity to find extractive word matches when necessary, for example transferring unseen proper noun phrases from the input. Similar issues have also been observed in neural translation models, particularly in terms of translating rare words (Luong et al., 2015).
To address this issue we experiment with tuning a very small set of additional features that trade off the abstractive/extractive tendency of the system. We do this by modifying our scoring function to directly estimate the probability of a summary using a log-linear model, as is standard in machine translation:
p(y | x; θ, α) ∝ exp( α^T Σ_{i=0}^{N−1} f(y_{i+1}, x, y_c) )
Where α ∈ R^5 is a weight vector and f is a feature function. Finding the best summary under this distribution corresponds to maximizing a factored scoring function s,
s(y, x) = Σ_{i=0}^{N−1} α^T f(y_{i+1}, x, y_c),
where g(y_{i+1}, x, y_c) = α^T f(y_{i+1}, x, y_c) to satisfy Eq. 4. The function f is defined to combine the local conditional probability with some additional indicator features:
f(y_{i+1}, x, y_c) = [ log p(y_{i+1} | x, y_c; θ),
1{∃j. y_{i+1} = x_j},
1{∃j. y_{i+1−k} = x_{j−k} ∀k ∈ {0, 1}},
1{∃j. y_{i+1−k} = x_{j−k} ∀k ∈ {0, 1, 2}},
1{∃k > j. y_i = x_k, y_{i+1} = x_j} ].
These features correspond to indicators of unigram, bigram, and trigram match with the input as well as reordering of input words. Note that setting α = (1, 0, . . . , 0) gives a model identical to standard ABS.
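A hedged sketch of these indicator features (our own illustration; the tuned score is the dot product of this vector with α):

```python
import math

def features(y_next, y_prev, x, cond_prob):
    """Tuning feature vector f: [log-prob, unigram, bigram, trigram, reordering]."""
    i = len(y_prev)
    uni = any(y_next == xj for xj in x)
    bi = any(i >= 1 and j >= 1 and y_next == x[j] and y_prev[-1] == x[j - 1]
             for j in range(len(x)))
    tri = any(i >= 2 and j >= 2 and y_next == x[j]
              and y_prev[-1] == x[j - 1] and y_prev[-2] == x[j - 2]
              for j in range(len(x)))
    # reordering: the previous word matched a later input position than the next word's match
    reorder = any(y_prev and y_prev[-1] == x[k] and y_next == x[j]
                  for j in range(len(x)) for k in range(j + 1, len(x)))
    return [math.log(cond_prob), float(uni), float(bi), float(tri), float(reorder)]
```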
After training the main neural model, we fix θ and tune the α parameters. We follow the statistical machine translation setup and use minimum-error rate training (MERT) to tune for the summarization metric on tuning data (Och, 2003). This tuning step is also identical to the one used for the phrase-based machine translation baseline.
# 6 Related Work
Abstractive sentence summarization has been traditionally connected to the task of headline generation. Our work is similar to early work of Banko et al. (2000) who developed a statistical machine translation-inspired approach for this task using a corpus of headline-article pairs. We extend this approach by: (1) using a neural summarization model as opposed to a count-based noisy-channel model, (2) training the model on much larger scale (25K compared to 4 million articles), and (3) allowing fully abstractive decoding.
This task was standardized around the DUC-2003 and DUC-2004 competitions (Over et al., 2007). The TOPIARY system (Zajic et al., 2004) performed the best in this task, and is described in detail in the next section. We point interested readers to the DUC web page (http://duc.nist.gov/) for the full list of systems entered in this shared task.
More recently, Cohn and Lapata (2008) give a compression method which allows for more arbitrary transformations. They extract tree transduction rules from aligned, parsed texts and learn weights on transformations using a max-margin learning algorithm. Woodsend et al. (2010) propose a quasi-synchronous grammar approach utilizing both context-free parses and dependency parses to produce legible summaries. Both of these approaches differ from ours in that they directly use the syntax of the input/output sentences. The latter system is W&L in our results; we attempted to train the former system T3 on this dataset but could not train it at scale.
In addition to Banko et al. (2000) there has been some work using statistical machine translation directly for abstractive summary. Wubben et al. (2012) utilize MOSES directly as a method for text simplification.
Recently Filippova and Altun (2013) developed a strictly extractive system that is trained on a relatively large corpus (250K sentences) of article-title pairs. Because their focus is extractive compression, the sentences are transformed by a series of heuristics such that the words are in monotonic alignment. Our system does not require this alignment step but instead uses the text directly.
Neural MT This work is closely related to recent work on neural network language models (NNLM) and to work on neural machine translation. The core of our model is a NNLM based on that of Bengio et al. (2003).
Recently, there have been several papers about models for machine translation (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014). Of these our model is most closely related to the attention-based model of Bahdanau et al. (2014), which explicitly finds a soft alignment between the current position and the input source. Most of these models utilize recurrent neural networks (RNNs) for generation as opposed to feed-forward models. We hope to incorporate an RNN-LM in future work.
# 7 Experimental Setup
We experiment with our attention-based sentence summarization model on the task of headline generation. In this section we describe the corpora used for this task, the baseline methods we compare with, and implementation details of our approach.
# 7.1 Data Set
The standard sentence summarization evaluation set is associated with the DUC-2003 and DUC-2004 shared tasks (Over et al., 2007). The data for this task consists of 500 news articles from the New York Times and Associated Press Wire services each paired with 4 different human-generated reference summaries (not actually headlines), capped at 75 bytes. This data set is evaluation-only, although the similarly sized DUC-2003 data set was made available for the task. The expectation is for a summary of roughly 14 words, based on the text of a complete article (although we only make use of the first sentence). The full data set is available by request at http://duc.nist.gov/data.html.
For this shared task, systems were entered and evaluated using several variants of the recall-oriented ROUGE metric (Lin, 2004). To make recall-only evaluation unbiased to length, output of all systems is cut off after 75 characters and no bonus is given for shorter summaries. Unlike BLEU which interpolates various n-gram matches, there are several versions of ROUGE for different match lengths. The DUC evaluation uses ROUGE-1 (unigrams), ROUGE-2 (bigrams), and ROUGE-L (longest-common substring), all of which we report.
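For illustration, a hedged sketch of recall-oriented ROUGE-N with the 75-character cap (our own simplified version, not the official DUC scorer; multi-reference aggregation is simplified):

```python
from collections import Counter

def rouge_n_recall(candidate, references, n=1, cap=75):
    """Clipped n-gram overlap between a truncated candidate and the references."""
    cand_words = candidate[:cap].split()
    cand_ngrams = Counter(tuple(cand_words[i:i + n]) for i in range(len(cand_words) - n + 1))
    hits, total = 0, 0
    for ref in references:
        ref_words = ref.split()
        ref_ngrams = Counter(tuple(ref_words[i:i + n]) for i in range(len(ref_words) - n + 1))
        hits += sum(min(c, cand_ngrams[g]) for g, c in ref_ngrams.items())
        total += sum(ref_ngrams.values())
    return hits / total if total else 0.0
```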
In addition to the standard DUC-2004 evaluation, we also report evaluation on single-reference headline generation using a randomly held-out subset of Gigaword. This evaluation is closer to the task the model is trained for, and it allows us to use a bigger evaluation set, which we will include in our code release. For this evaluation, we tune systems to generate output of the average title length.
For training data for both tasks, we utilize the annotated Gigaword data set (Graff et al., 2003; Napoles et al., 2012), which consists of standard Gigaword, preprocessed with Stanford CoreNLP tools (Manning et al., 2014). Our model only uses annotations for tokenization and sentence separation, although several of the baselines use parsing and tagging as well. Gigaword contains around 9.5 million news articles sourced from various domestic and international news services over the last two decades.
For our training set, we pair the headline of each article with its first sentence to create an input-summary pair. While the model could in theory be trained on any pair, Gigaword contains many spurious headline-article pairs. We therefore prune training based on the following heuristic filters: (1) Are there no non-stop-words in common? (2) Does the title contain a byline or other extraneous editing marks? (3) Does the title have a question mark or colon? After applying these filters, the training set consists of roughly J = 4 million title-article pairs. We apply a minimal preprocessing step using PTB tokenization, lower-casing, replacing all digit characters with #, and replacing word types seen fewer than 5 times with UNK. We also remove all articles from the time-period of the DUC evaluation.
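A hedged sketch of these pruning and preprocessing heuristics (our own illustration; the stop-word list and the byline test are stand-ins, not the exact filters used):

```python
import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "and"}  # illustrative list only

def keep_pair(title, first_sentence):
    """Return True if a (title, first sentence) pair passes the heuristic filters."""
    title_words = {w.lower() for w in title.split()} - STOPWORDS
    sent_words = {w.lower() for w in first_sentence.split()} - STOPWORDS
    if not (title_words & sent_words):            # (1) no non-stop-words in common
        return False
    if re.search(r"\bby\b|--", title.lower()):    # (2) byline / editing marks (rough proxy)
        return False
    if "?" in title or ":" in title:              # (3) question mark or colon
        return False
    return True

def preprocess(text):
    """Minimal preprocessing: lower-case and map digits to '#' (PTB tokenization omitted)."""
    return re.sub(r"\d", "#", text.lower())
```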
The complete input training vocabulary consists of 119 million word tokens and 110K unique word types with an average sentence size of 31.3 words. The headline vocabulary consists of 31 million tokens and 69K word types with the average title of length 8.3 words (note that this is significantly shorter than the DUC summaries). On average there are 4.6 overlapping word types between the headline and the input; although only 2.6 in the first 75 characters of the input.
# 7.2 Baselines
Due to the variety of approaches to the sentence summarization problem, we report a broad set of headline-generation baselines.
From the DUC-2004 task we include the PREFIX baseline that simply returns the first 75 characters of the input as the headline. We also report the winning system on this shared task, TOPIARY (Zajic et al., 2004). TOPIARY merges a compression system using linguistically-motivated transformations of the input (Dorr et al., 2003) with an unsupervised topic detection (UTD) algorithm that appends key phrases from the full article onto the compressed output. Woodsend et al. (2010) (described above) also report results on the DUC dataset.
The DUC task also includes a set of manual summaries performed by 8 human summarizers each summarizing half of the test data sentences (yielding 4 references per sentence). We report the average inter-annotator agreement score as REFERENCE. For reference, the best human evaluator scores 31.7 ROUGE-1.
We also include several baselines that have access to the same training data as our system. The first is a sentence compression baseline COMPRESS (Clarke and Lapata, 2008). This model uses the syntactic structure of the original sentence along with a language model trained on the headline data to produce a compressed output. The syntax and language model are combined with a set of linguistic constraints and decoding is performed with an ILP solver.
To control for memorizing titles from training, we implement an information retrieval baseline, IR. This baseline indexes the training set, and gives the title for the article with highest BM-25 match to the input (see Manning et al. (2008)).
Finally, we use a phrase-based statistical machine translation system trained on Gigaword to produce summaries, MOSES+ (Koehn et al., 2007). To improve the baseline for this task, we augment the phrase table with "deletion" rules mapping each article word to ε, include an additional deletion feature for these rules, and allow for an infinite distortion limit. We also explicitly tune the model using MERT to target the 75-byte capped ROUGE score as opposed to standard BLEU-based tuning. Unfortunately, one remaining issue is that it is non-trivial to modify the translation decoder to produce fixed-length outputs, so we tune the system to produce roughly the expected length.
| Model | DUC-2004 ROUGE-1 | DUC-2004 ROUGE-2 | DUC-2004 ROUGE-L | Gigaword ROUGE-1 | Gigaword ROUGE-2 | Gigaword ROUGE-L | Ext. % |
|---|---|---|---|---|---|---|---|
| IR | 11.06 | 1.67 | 9.67 | 16.91 | 5.55 | 15.58 | 29.2 |
| PREFIX | 22.43 | 6.49 | 19.65 | 23.14 | 8.25 | 21.73 | 100 |
| COMPRESS | 19.77 | 4.02 | 17.30 | 19.63 | 5.13 | 18.28 | 100 |
| W&L | 22 | 6 | 17 | - | - | - | - |
| TOPIARY | 25.12 | 6.46 | 20.12 | - | - | - | - |
| MOSES+ | 26.50 | 8.13 | 22.85 | 28.77 | 12.10 | 26.44 | 70.5 |
| ABS | 26.55 | 7.06 | 22.05 | 30.88 | 12.22 | 27.77 | 85.4 |
| ABS+ | 28.18 | 8.49 | 23.81 | 31.00 | 12.65 | 28.34 | 91.5 |
| REFERENCE | 29.21 | 8.38 | 24.46 | - | - | - | 45.6 |
Table 1: Experimental results on the main summary tasks on various ROUGE metrics. Baseline models are described in detail in Section 7.2. We report the percentage of tokens in the summary that also appear in the input for Gigaword as Ext. %.
# 7.3 Implementation
For training, we use mini-batch stochastic gradient descent to minimize negative log-likelihood. We use a learning rate of 0.05, and split the learning rate by half if validation log-likelihood does not improve for an epoch. Training is performed with shuffled mini-batches of size 64. The minibatches are grouped by input length. After each epoch, we renormalize the embedding tables (Hinton et al., 2012). Based on the validation set, we set hyperparameters as D = 200, H = 400, C = 5, L = 3, and Q = 2.
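For reference, the reported hyperparameters can be collected in a single configuration sketch (shown as a Python dictionary for illustration; the original implementation is in Torch):

```python
config = {
    "learning_rate": 0.05,      # halved when validation log-likelihood stalls for an epoch
    "batch_size": 64,           # shuffled mini-batches, grouped by input length
    "D": 200,                   # context/word embedding size
    "H": 400,                   # hidden size
    "C": 5,                     # summary-side context window
    "L": 3,                     # number of TDNN layers
    "Q": 2,                     # input-side smoothing window
    "renormalize_embeddings": True,   # after each epoch (Hinton et al., 2012)
}
```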
Our implementation uses the Torch numerical framework (http://torch.ch/) and will be openly available along with the data pipeline. Crucially, training is performed on GPUs and would be intractable or require approximations otherwise. Processing 1000 mini-batches with D = 200, H = 400 requires 160 seconds. Best validation accuracy is reached after 15 epochs through the data, which requires around 4 days of training. Additionally, as described in Section 5 we apply a MERT tuning step after training using the DUC-2003 data. For this step we use Z-MERT (Zaidan, 2009). We refer to the main model as ABS and the tuned model as ABS+.
# 8 Results

Our main results are presented in Table 1. We run experiments both using the DUC-2004 evaluation data set (500 sentences, 4 references, 75 bytes) with all systems and a randomly held-out Gigaword test set (2000 sentences, 1 reference). We first note that the baselines COMPRESS and IR do relatively poorly on both datasets, indicating that neither article information nor language model information alone is sufficient for the task. The PREFIX baseline actually performs surprisingly well on ROUGE-1, which makes sense given the earlier observed overlap between article and summary. Both ABS and MOSES+ perform better than TOPIARY, particularly on ROUGE-2 and ROUGE-L in DUC. The full model ABS+ scores the best on these tasks, and is significantly better based on the default ROUGE confidence level than TOPIARY on all metrics, and MOSES+ on ROUGE-1 for DUC as well as ROUGE-1 and ROUGE-L for Gigaword. Note that the additional extractive features bias the system towards retaining more input words, which is useful for the underlying metric.

Next we consider ablations to the model and algorithm structure. Table 2 shows experiments for the model with various encoders. For these experiments we look at the perplexity of the system as a language model on validation data, which controls for the variable of inference and tuning. The NNLM language model with no encoder gives a gain over the standard n-gram language model. Including even the bag-of-words encoder reduces the perplexity to below 50. Both the convolutional encoder and the attention-based encoder further reduce the perplexity, with attention giving a value below 30.

We also consider model and decoding ablations on the main summary model, shown in Table 3. These experiments compare to the BoW encoding models, compare beam search and greedy decoding, as well as restricting the system to be completely extractive. Of these features, the biggest impact is from using a more powerful encoder (attention versus BoW), as well as using beam search to generate summaries. The abstractive nature of the system helps, but for ROUGE even using pure extractive generation is effective.
| Model | Encoder | Perplexity |
|---|---|---|
| KN-Smoothed 5-Gram | none | 183.2 |
| Feed-Forward NNLM | none | 145.9 |
| Bag-of-Word | enc1 | 43.6 |
| Convolutional (TDNN) | enc2 | 35.9 |
| Attention-Based (ABS) | enc3 | 27.1 |
Table 2: Perplexity results on the Gigaword validation set comparing various language models with C=5 and end-to-end summarization models. The encoders are defined in Section 3.
| Decoder | Model | Cons. | R-1 | R-2 | R-L |
|---|---|---|---|---|---|
| Greedy | ABS+ | Abs | 26.67 | 6.72 | 21.70 |
| Beam | BoW | Abs | 22.15 | 4.60 | 18.23 |
| Beam | ABS+ | Ext | 27.89 | 7.56 | 22.84 |
| Beam | ABS+ | Abs | 28.48 | 8.91 | 23.97 |
Table 3: ROUGE scores on DUC-2003 development data for various versions of inference. Greedy and Beam are described in Section 4. Ext. is a purely extractive version of the system (Eq. 2).
Finally we consider example summaries shown in Figure 4. Despite improving on the baseline scores, this model is far from human performance on this task. Generally the models are good at picking out key words from the input, such as names and places. However, both models will reorder words in syntactically incorrect ways, for instance in Sentence 7 both models have the wrong subject. ABS often uses more interesting re-wording, for instance new nz pm after election in Sentence 4, but this can also lead to attachment mistakes such as a russian oil giant chevron in Sentence 11.
# 9 Conclusion
We have presented a neural attention-based model for abstractive summarization, based on recent developments in neural machine translation. We combine this probabilistic model with a generation algorithm which produces accurate abstractive summaries. As a next step we would like to further improve the grammaticality of the summaries in a data-driven way, as well as scale this system to generate paragraph-level summaries. Both pose additional challenges in terms of efficient alignment and consistency in generation.
I(1): a detained iranian-american academic accused of acting against national security has been released from a tehran prison after a hefty bail was posted, a top judiciary official said tuesday.
G: iranian-american academic held in tehran released on bail
A: detained iranian-american academic released from jail after posting bail
A+: detained iranian-american academic released from prison after hefty bail
I(2): ministers from the european union and its mediterranean neighbors gathered here under heavy security on monday for an unprecedented conference on economic and political cooperation.
G: european mediterranean ministers gather for landmark conference by julie bradford
A: mediterranean neighbors gather for unprecedented conference on heavy security
A+: mediterranean neighbors gather under heavy security for unprecedented conference
I(3): the death toll from a school collapse in a haitian shanty-town rose to ## after rescue workers uncovered a classroom with ## dead students and their teacher, officials said saturday.
G: toll rises to ## in haiti school unk: official
A: death toll in haiti school accident rises to ##
A+: death toll in haiti school to ## dead students
I(4): australian foreign minister stephen smith sunday congratulated new zealand's new prime minister-elect john key as he praised ousted leader helen clark as a "gutsy" and respected politician.
G: time caught up with nz's gutsy clark says australian fm
A: australian foreign minister congratulates new nz pm after election
A+: australian foreign minister congratulates smith new zealand as leader
I(5): two drunken south african fans hurled racist abuse at the country's rugby sevens coach after the team were eliminated from the weekend's hong kong tournament, reports said tuesday.
G: rugby union: racist taunts mar hong kong sevens: report
A: south african fans hurl racist taunts at rugby sevens
A+: south african fans racist abuse at rugby sevens tournament
I(6): christian conservatives -- kingmakers in the last two us presidential elections -- may have less success in getting their pick elected in ####, political observers say.
G: christian conservatives power diminished ahead of #### vote
A: christian conservatives may have less success in #### election
A+: christian conservatives in the last two us presidential elections
I(7): the white house on thursday warned iran of possible new sanctions after the un nuclear watchdog reported that tehran had begun sensitive nuclear work at a key site in defiance of un resolutions.
G: us warns iran of step backward on nuclear issue
A: iran warns of possible new sanctions on nuclear work
A+: un nuclear watchdog warns iran of possible new sanctions
I(8): thousands of kashmiris chanting pro-pakistan slogans on sunday attended a rally to welcome back a hardline separatist leader who underwent cancer treatment in mumbai.
G: thousands attend rally for kashmir hardliner
A: thousands rally in support of hardline kashmiri separatist leader
A+: thousands of kashmiris rally to welcome back cancer treatment
I(9): an explosion in iraq's restive northeastern province of diyala killed two us soldiers and wounded two more, the military reported monday.
G: two us soldiers killed in iraq blast december toll ###
A: # us two soldiers killed in restive northeast province
A+: explosion in restive northeastern province kills two us soldiers
I(10): russian world no. # nikolay davydenko became the fifth withdrawal through injury or illness at the sydney international wednesday, retiring from his second round match with a foot injury.
G: tennis: davydenko pulls out of sydney with injury
A: davydenko pulls out of sydney international with foot injury
A+: russian world no. # davydenko retires at sydney international
I(11): russia's gas and oil giant gazprom and us oil major chevron have set up a joint venture based in resource-rich northwestern siberia, the interfax news agency reported thursday quoting gazprom officials.
G: gazprom chevron set up joint venture
A: russian oil giant chevron set up siberia joint venture
A+: russia's gazprom set up joint venture in siberia
Figure 4: Example sentence summaries produced on Gigaword. I is the input, A is ABS, and G is the true headline.

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
Michele Banko, Vibhu O. Mittal, and Michael J. Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 318-325. Association for Computational Linguistics.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137-1155.
Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014, pages 1724-1734.
James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, pages 399-429.
Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, pages 137-144. Association for Computational Linguistics.
Hal Daumé III and Daniel Marcu. 2002. A noisy-channel model for document compression. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 449-456. Association for Computational Linguistics.
Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLT-NAACL 03 Text Summarization Workshop - Volume 5, pages 1-8. Association for Computational Linguistics.
Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In EMNLP, pages 1481-1491.
David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia.
Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.
Hongyan Jing. 2002. Using hidden markov modeling to decompose human-written summaries. Computational Linguistics, 28(4):527-543.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP, pages 1700-1709.
Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91-107.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177-180. Association for Computational Linguistics.
Chin-Yew Lin. 2004. Rouge: A package for automatic In Text Summarization evaluation of summaries. Branches Out: Proceedings of the ACL-04 Work- shop, pages 74â81. | 1509.00685#41 | A Neural Attention Model for Abstractive Sentence Summarization | Summarization based on text extraction is inherently limited, but
Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 11–19.

Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval, volume 1. Cambridge University Press, Cambridge.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60.

Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 95–100. Association for Computational Linguistics.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, Volume 1, pages 160–167. Association for Computational Linguistics.

Paul Over, Hoa Dang, and Donna Harman. 2007. DUC in context. Information Processing & Management, 43(6):1506–1520.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.

Kristian Woodsend, Yansong Feng, and Mirella Lapata. 2010. Generation with quasi-synchronous grammar. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 513–523. Association for Computational Linguistics.

Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers, Volume 1, pages 1015–1024. Association for Computational Linguistics.
# Neural Machine Translation of Rare Words with Subword Units

Rico Sennrich, Barry Haddow, Alexandra Birch

# Abstract
Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English→German and English→Russian by up to 1.1 and 1.3 BLEU, respectively.

# 1 Introduction
Neural machine translation has recently shown impressive results (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). However, the translation of rare words is an open problem. The vocabulary of neural models is typically limited to 30 000–50 000 words, but translation is an open-vocabulary problem, and especially for languages with productive word formation processes such as agglutination and compounding, translation models require mechanisms that go below the word level. As an example, consider compounds such as the German Abwasser|behandlungs|anlange 'sewage water treatment plant', for which a segmented, variable-length representation is intuitively more appealing than encoding the word as a fixed-length vector.
The translation of out-of-vocabulary words has been addressed through a back-off to a dictionary look-up (Jean et al., 2015; Luong et al., 2015b). We note that such techniques make assumptions that often do not hold true in practice. For instance, there is not always a 1-to-1 correspondence between source and target words because of variance in the degree of morphological synthesis between languages, like in our introductory compounding example. Also, word-level models are unable to translate or generate unseen words. Copying unknown words into the target text, as done by (Jean et al., 2015; Luong et al., 2015b), is a reasonable strategy for names, but morphological changes and transliteration are often required, especially if alphabets differ.

We investigate NMT models that operate on the level of subword units. Our main goal is to model open-vocabulary translation in the NMT network itself, without requiring a back-off model for rare words. In addition to making the translation process simpler, we also find that the subword models achieve better accuracy for the translation of rare words than large-vocabulary models and back-off dictionaries, and are able to productively generate new words that were not seen at training time. Our analysis shows that the neural networks are able to learn compounding and transliteration from subword representations.
The research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland.
This paper has two main contributions:
⢠We show that open-vocabulary neural machine translation is possible by encoding (rare) words via subword units. We ï¬nd our architecture simpler and more effective than using large vocabularies and back-off dictio- naries (Jean et al., 2015; Luong et al., 2015b).
⢠We adapt byte pair encoding (BPE) (Gage, 1994), a compression algorithm, to the task of word segmentation. BPE allows for the representation of an open vocabulary through a ï¬xed-size vocabulary of variable-length character sequences, making it a very suit- able word segmentation strategy for neural network models.
# 2 Neural Machine Translation
We follow the neural machine translation architecture by Bahdanau et al. (2015), which we will briefly summarize here. However, we note that our approach is not specific to this architecture.

The neural machine translation system is implemented as an encoder-decoder network with recurrent neural networks.
The encoder is a bidirectional neural network with gated recurrent units (Cho et al., 2014) that reads an input sequence $x = (x_1, \dots, x_m)$ and calculates a forward sequence of hidden states $(\overrightarrow{h}_1, \dots, \overrightarrow{h}_m)$, and a backward sequence $(\overleftarrow{h}_1, \dots, \overleftarrow{h}_m)$. The hidden states $\overrightarrow{h}_j$ and $\overleftarrow{h}_j$ are concatenated to obtain the annotation vector $h_j$.
The decoder is a recurrent neural network that predicts a target sequence y = (y_1, ..., y_n). Each word y_i is predicted based on a recurrent hidden state s_i, the previously predicted word y_{i-1}, and a context vector c_i. c_i is computed as a weighted sum of the annotations h_j. The weight of each annotation h_j is computed through an alignment model α_ij, which models the probability that y_i is aligned to x_j. The alignment model is a single-layer feedforward neural network that is learned jointly with the rest of the network through backpropagation.
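In symbols, the quantities described above take the standard form of the attention mechanism of Bahdanau et al. (2015); the parameterization of the alignment model $a$ itself is not repeated here:

$$h_j = \left[\overrightarrow{h}_j; \overleftarrow{h}_j\right], \qquad e_{ij} = a(s_{i-1}, h_j), \qquad \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{m} \exp(e_{ik})}, \qquad c_i = \sum_{j=1}^{m} \alpha_{ij} h_j.$$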
A detailed description can be found in (Bahdanau et al., 2015). Training is performed on a parallel corpus with stochastic gradient descent. For translation, a beam search with small beam size is employed.
# 3 Subword Translation
The main motivation behind this paper is that the translation of some words is transparent in that they are translatable by a competent translator even if they are novel to him or her, based on a translation of known subword units such as morphemes or phonemes.
Word categories whose translation is potentially transparent include:
⢠named entities. Between languages that share an alphabet, names can often be copied from source to target text. Transcription or translit- eration may be required, especially if the al- phabets or syllabaries differ. Example: Barack Obama (English; German) ÐаÑак Ðбама (Russian) ãã©ã¯ã»ãªãã (ba-ra-ku o-ba-ma) (Japanese)
⢠cognates and loanwords. Cognates and loan- words with a common origin can differ in regular ways between languages, so that character-level translation rules are sufï¬cient (Tiedemann, 2012). Example: claustrophobia (English) Klaustrophobie (German) ÐлаÑÑÑÑоÑÐ¾Ð±Ð¸Ñ (Klaustrofobiâ) (Russian) | 1508.07909#6 | Neural Machine Translation of Rare Words with Subword Units | Neural machine translation (NMT) models typically operate with a fixed
• morphologically complex words. Words containing multiple morphemes, for instance formed via compounding, affixation, or inflection, may be translatable by translating the morphemes separately. Example: solar system (English); Sonnensystem (Sonne + System) (German); Naprendszer (Nap + Rendszer) (Hungarian)

In an analysis of 100 rare tokens (not among the 50 000 most frequent types) in our German training data1, the majority of tokens are potentially translatable from English through smaller units. We find 56 compounds, 21 names, 6 loanwords with a common origin (emancipate → emanzipieren), 5 cases of transparent affixation (sweetish 'sweet' + '-ish' → süßlich 'süß' + '-lich'), 1 number and 1 computer language identifier.
Our hypothesis is that a segmentation of rare words into appropriate subword units is sufficient to allow for the neural translation network to learn transparent translations, and to generalize this knowledge to translate and produce unseen words.2 We provide empirical support for this hypothesis in Sections 4 and 5. First, we discuss different subword representations.

1 Primarily parliamentary proceedings and web crawl data.

2 Not every segmentation we produce is transparent. While we expect no performance benefit from opaque segmentations, i.e. segmentations where the units cannot be translated independently, our NMT models show robustness towards oversplitting.
# 3.1 Related Work
For Statistical Machine Translation (SMT), the translation of unknown words has been the subject of intensive research.
A large proportion of unknown words are names, which can just be copied into the target text if both languages share an alphabet. If alphabets differ, transliteration is required (Durrani et al., 2014). Character-based translation has also been investigated with phrase-based models, which proved especially successful for closely related languages (Vilar et al., 2007; Tiedemann, 2009; Neubig et al., 2012).
The segmentation of morphologically complex words such as compounds is widely used for SMT, and various algorithms for morpheme segmentation have been investigated (Nießen and Ney, 2000; Koehn and Knight, 2003; Virpioja et al., 2007; Stallard et al., 2012). Segmentation algorithms commonly used for phrase-based SMT tend to be conservative in their splitting decisions, whereas we aim for an aggressive segmentation that allows for open-vocabulary translation with a compact network vocabulary, and without having to resort to back-off dictionaries.

The best choice of subword units may be task-specific. For speech recognition, phone-level language models have been used (Bazzi and Glass, 2000). Mikolov et al. (2012) investigate subword language models, and propose to use syllables. For multilingual segmentation tasks, multilingual algorithms have been proposed (Snyder and Barzilay, 2008). We find these intriguing, but inapplicable at test time.
Various techniques have been proposed to produce fixed-length continuous word vectors based on characters or morphemes (Luong et al., 2013; Botha and Blunsom, 2014; Ling et al., 2015a; Kim et al., 2015). An effort to apply such techniques to NMT, parallel to ours, has found no significant improvement over word-based approaches (Ling et al., 2015b). One technical difference from our work is that the attention mechanism still operates on the level of words in the model by Ling et al. (2015b), and that the representation of each word is fixed-length. We expect that the attention mechanism benefits from our variable-length representation: the network can learn to place attention on different subword units at each step. Recall our introductory example Abwasserbehandlungsanlange, for which a subword segmentation avoids the information bottleneck of a fixed-length representation.
Neural machine translation differs from phrase-based methods in that there are strong incentives to minimize the vocabulary size of neural models to increase time and space efficiency, and to allow for translation without back-off models. At the same time, we also want a compact representation of the text itself, since an increase in text length reduces efficiency and increases the distances over which neural models need to pass information.

A simple method to manipulate the trade-off between vocabulary size and text size is to use shortlists of unsegmented words, using subword units only for rare words. As an alternative, we propose a segmentation algorithm based on byte pair encoding (BPE), which lets us learn a vocabulary that provides a good compression rate of the text.
# 3.2 Byte Pair Encoding (BPE)
Byte Pair Encoding (BPE) (Gage, 1994) is a simple data compression technique that iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. We adapt this algorithm for word segmentation. Instead of merging frequent pairs of bytes, we merge characters or character sequences.
Firstly, we initialize the symbol vocabulary with the character vocabulary, and represent each word as a sequence of characters, plus a special end-of-word symbol '·', which allows us to restore the original tokenization after translation. We iteratively count all symbol pairs and replace each occurrence of the most frequent pair ('A', 'B') with a new symbol 'AB'. Each merge operation produces a new symbol which represents a character n-gram. Frequent character n-grams (or whole words) are eventually merged into a single symbol, thus BPE requires no shortlist. The final symbol vocabulary size is equal to the size of the initial vocabulary, plus the number of merge operations; the latter is the only hyperparameter of the algorithm.

For efficiency, we do not consider pairs that cross word boundaries. The algorithm can thus be run on the dictionary extracted from a text, with each word being weighted by its frequency. A minimal Python implementation is shown in Algorithm 1.
# Algorithm 1 Learn BPE operations
import re, collections
def get_stats(vocab):
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols)-1):
            pairs[symbols[i],symbols[i+1]] += freq
    return pairs

def merge_vocab(pair, v_in):
    v_out = {}
    bigram = re.escape(' '.join(pair))
    p = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    for word in v_in:
        w_out = p.sub(''.join(pair), word)
        v_out[w_out] = v_in[word]
    return v_out

vocab = {'l o w </w>' : 5, 'l o w e r </w>' : 2,
         'n e w e s t </w>' : 6, 'w i d e s t </w>' : 3}
num_merges = 10
for i in range(num_merges):
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_vocab(best, vocab)
    print(best)
r ·  →  r·
l o  →  lo
lo w →  low
e r· →  er·

Figure 1: BPE merge operations learned from dictionary {'low', 'lowest', 'newer', 'wider'}.
In practice, we increase efficiency by indexing all pairs, and updating data structures incrementally.

The main difference to other compression algorithms, such as Huffman encoding, which have been proposed to produce a variable-length encoding of words for NMT (Chitnis and DeNero, 2015), is that our symbol sequences are still interpretable as subword units, and that the network can generalize to translate and produce new words (unseen at training time) on the basis of these subword units.

Figure 1 shows a toy example of learned BPE operations. At test time, we first split words into sequences of characters, then apply the learned operations to merge the characters into larger, known symbols. This is applicable to any word, and allows for open-vocabulary networks with fixed symbol vocabularies.3 In our example, the OOV 'lower' would be segmented into 'low er·'.
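A minimal sketch of this test-time procedure, assuming the learned merges are available as an ordered list; the function name and the '</w>' end-of-word marker follow the toy implementation above, and this is an illustration rather than the paper's released tooling:

def segment_word(word, bpe_merges):
    # Start from single characters plus the end-of-word marker.
    symbols = list(word) + ['</w>']
    # Apply each merge operation in the order it was learned.
    for left, right in bpe_merges:
        merged, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and symbols[i] == left and symbols[i+1] == right:
                merged.append(left + right)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols

# With the merges of Figure 1 (end-of-word written as '</w>'):
merges = [('r', '</w>'), ('l', 'o'), ('lo', 'w'), ('e', 'r</w>')]
print(segment_word('lower', merges))  # ['low', 'er</w>'], i.e. 'low er·'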
3 The only symbols that will be unknown at test time are unknown characters, or symbols of which all occurrences in the training text have been merged into larger symbols, like 'safeguar', which has all occurrences in our training text merged into 'safeguard'. We observed no such symbols at test time, but the issue could be easily solved by recursively reversing specific merges until all symbols are known.
We evaluate two methods of applying BPE: learning two independent encodings, one for the source, one for the target vocabulary, or learning the encoding on the union of the two vocabularies (which we call joint BPE).4 The former has the advantage of being more compact in terms of text and vocabulary size, and having stronger guarantees that each subword unit has been seen in the training text of the respective language, whereas the latter improves consistency between the source and the target segmentation. If we apply BPE independently, the same name may be segmented differently in the two languages, which makes it harder for the neural models to learn a mapping between the subword units. To increase the consistency between English and Russian segmentation despite the differing alphabets, we transliterate the Russian vocabulary into Latin characters with ISO-9 to learn the joint BPE encoding, then transliterate the BPE merge operations back into Cyrillic to apply them to the Russian training text.5
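A sketch of the joint variant: per footnote 4, the encoding is simply learned on the concatenation of the source and target sides of the training data. The helper learn_bpe stands for the loop of Algorithm 1 and the file names are illustrative, not part of the paper's code:

import collections

def joint_bpe_vocabulary(source_lines, target_lines):
    # Count words over both sides, in the 'l o w </w>' format of Algorithm 1.
    vocab = collections.Counter()
    for line in list(source_lines) + list(target_lines):
        for word in line.split():
            vocab[' '.join(word) + ' </w>'] += 1
    return vocab

# merges = learn_bpe(joint_bpe_vocabulary(open('train.en'), open('train.ru')), num_merges=89500)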
# 4 Evaluation
We aim to answer the following empirical questions:

• Can we improve the translation of rare and unseen words in neural machine translation by representing them via subword units?

• Which segmentation into subword units performs best in terms of vocabulary size, text size, and translation quality?
We perform experiments on data from the shared translation task of WMT 2015. For English→German, our training set consists of 4.2 million sentence pairs, or approximately 100 million tokens. For English→Russian, the training set consists of 2.6 million sentence pairs, or approximately 50 million tokens. We tokenize and truecase the data with the scripts provided in Moses (Koehn et al., 2007). We use newstest2013 as development set, and report results on newstest2014 and newstest2015.

We report results with BLEU (mteval-v13a.pl), and CHRF3 (Popović, 2015), a character n-gram F3 score which was found to correlate well with human judgments, especially for translations out of English (Stanojević et al., 2015).
4 In practice, we simply concatenate the source and target side of the training set to learn joint BPE.

5 Since the Russian training text also contains words that use the Latin alphabet, we also apply the Latin BPE operations.

Since our main claim is concerned with the translation of rare and unseen words, we report separate statistics for these. We measure these through unigram F1, which we calculate as the harmonic mean of clipped unigram precision and recall.6
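A sketch of this unigram F1 computation, assuming the usual BLEU-style clipping in which hypothesis counts are clipped to the corresponding reference counts; the function is an illustration, not the paper's evaluation script:

from collections import Counter

def unigram_f1(hypothesis_tokens, reference_tokens):
    hyp, ref = Counter(hypothesis_tokens), Counter(reference_tokens)
    # Clipped matches: each hypothesis token counts at most as often as in the reference.
    clipped = sum(min(count, ref[token]) for token, count in hyp.items())
    precision = clipped / max(sum(hyp.values()), 1)
    recall = clipped / max(sum(ref.values()), 1)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

In the paper this statistic is additionally restricted to particular word sets (all words, rare words, OOVs).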
We perform all experiments with Groundhog7 (Bahdanau et al., 2015). We generally follow settings by previous work (Bahdanau et al., 2015; Jean et al., 2015). All networks have a hidden layer size of 1000, and an embedding layer size of 620. Following Jean et al. (2015), we only keep a shortlist of τ = 30 000 words in memory.
During training, we use Adadelta (Zeiler, 2012), a minibatch size of 80, and reshuffle the training set between epochs. We train a network for approximately 7 days, then take the last 4 saved models (models being saved every 12 hours), and continue training each with a fixed embedding layer (as suggested by (Jean et al., 2015)) for 12 hours. We perform two independent training runs for each model, once with a cut-off for gradient clipping (Pascanu et al., 2013) of 5.0, once with a cut-off of 1.0; the latter produced better single models for most settings. We report results of the system that performed best on our development set (newstest2013), and of an ensemble of all 8 models.

We use a beam size of 12 for beam search, with probabilities normalized by sentence length. We use a bilingual dictionary based on fast-align (Dyer et al., 2013). For our baseline, this serves as back-off dictionary for rare words. We also use the dictionary to speed up translation for all experiments, only performing the softmax over a filtered list of candidate translations (like Jean et al. (2015), we use K = 30000; K′ = 10).
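One simple form of the length normalization mentioned here is to divide the accumulated log-probability of a hypothesis by its length; the paper does not spell out the exact formula, so this is only an assumption:

$$\mathrm{score}(y) = \frac{1}{|y|} \sum_{i=1}^{|y|} \log p(y_i \mid y_{<i}, x).$$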
# 4.1 Subword statistics
Apart from translation quality, which we will verify empirically, our main objective is to represent an open vocabulary through a compact fixed-size subword vocabulary, and allow for efficient training and decoding.8

Statistics for different segmentations of the German side of the parallel data are shown in Table 1.

6 Clipped unigram precision is essentially 1-gram BLEU without brevity penalty.

7 github.com/sebastien-j/LV_groundhog

8 The time complexity of encoder-decoder architectures is at least linear to sequence length, and oversplitting harms efficiency.
A simple baseline is the segmentation of words into character n-grams.9 Character n-grams allow for different trade-offs between sequence length (# tokens) and vocabulary size (# types), depending on the choice of n. The increase in sequence length is substantial; one way to reduce sequence length is to leave a shortlist of the k most frequent word types unsegmented. Only the unigram representation is truly open-vocabulary. However, the unigram representation performed poorly in preliminary experiments, and we report translation results with a bigram representation, which is empirically better, but unable to produce some tokens in the test set with the training set vocabulary.
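A sketch of this character n-gram baseline with a shortlist, in the spirit of the C2-50k setting reported below; the shortlist size, the marker for word-final units (cf. footnote 9), and the function name are illustrative:

def char_ngram_segment(word, shortlist, n=2):
    # Frequent words stay unsegmented; everything else is split into
    # consecutive character n-grams, with '</w>' marking the word-final unit.
    if word in shortlist:
        return [word]
    chars = list(word) + ['</w>']
    return [''.join(chars[i:i+n]) for i in range(0, len(chars), n)]

# shortlist = the 50 000 most frequent word types of the training corpus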
We report statistics for several word segmentation techniques that have proven useful in previous SMT research, including frequency-based compound splitting (Koehn and Knight, 2003), rule-based hyphenation (Liang, 1983), and Morfessor (Creutz and Lagus, 2002). We find that they only moderately reduce vocabulary size, and do not solve the unknown word problem, and we thus find them unsuitable for our goal of open-vocabulary translation without back-off dictionary.
BPE meets our goal of being open-vocabulary, and the learned merge operations can be applied to the test set to obtain a segmentation with no unknown symbols.10 Its main difference from the character-level model is that the more compact representation of BPE allows for shorter sequences, and that the attention model operates on variable-length units.11 Table 1 shows BPE with 59 500 merge operations, and joint BPE with 89 500 operations.

In practice, we did not include infrequent subword units in the NMT network vocabulary, since there is noise in the subword symbol sets, e.g. because of characters from foreign alphabets. Hence, our network vocabularies in Table 2 are typically slightly smaller than the number of types in Table 1.
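A sketch of this vocabulary filtering step; the frequency threshold is an assumption, since the paper does not state the exact cutoff used:

from collections import Counter

def network_vocabulary(segmented_lines, min_frequency=2):
    # Count subword units in the BPE-segmented training text and drop rare,
    # noisy symbols (e.g. stray characters from foreign alphabets).
    counts = Counter(unit for line in segmented_lines for unit in line.split())
    return {unit for unit, freq in counts.items() if freq >= min_frequency}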
1508.07909 | 23 | 9Our character n-grams do not cross word boundaries. We mark whether a subword is word-ï¬nal or not with a special character, which allows us to restore the original tokenization. 10Joint BPE can produce segments that are unknown be- cause they only occur in the English training text, but these are rare (0.05% of test tokens).
11We highlighted the limitations of word-level attention in section 3.1. At the other end of the spectrum, the character level is suboptimal for alignment (Tiedemann, 2009).
| name | segmentation | shortlist | BLEU (single) | BLEU (ens-8) | CHRF3 (single) |
|---|---|---|---|---|---|
| syntax-based (Sennrich and Haddow, 2015) | - | - | 24.4 | - | 55.3 |
| WUnk | - | - | 20.6 | 22.8 | 47.2 |
| WDict | - | - | 22.0 | 24.2 | 50.5 |
| C2-50k | char-bigram | 50 000 | 22.8 | 25.3 | 51.9 |
| BPE-60k | BPE | - | 21.5 | 24.5 | 52.0 |
| BPE-J90k | BPE (joint) | - | 22.8 | 24.7 | 51.7 |
Table 2: English→German translation performance (BLEU, CHRF3 and unigram F1) on newstest2015. Ens-8: ensemble of 8 models. Best NMT system in bold. Unigram F1 (with ensembles) is computed for all words (n = 44085), rare words (not among top 50 000 in training set; n = 2900), and OOVs (not in training set; n = 1168).
[Table 1 data: # tokens, # types and # UNK for each word segmentation technique; see the caption below.]
Unigram F1 scores indicate that learning the BPE symbols on the vocabulary union (BPE-J90k) is more effective than learning them separately (BPE-60k), and more effective than using character bigrams with a shortlist of 50 000 unsegmented words (C2-50k), but all reported subword segmentations are viable choices and outperform the back-off dictionary baseline.
Table 1: Corpus statistics for German training corpus with different word segmentation techniques. #UNK: number of unknown tokens in newstest2013. △: (Koehn and Knight, 2003); *: (Creutz and Lagus, 2002); †: (Liang, 1983).
# 4.2 Translation experiments
English→German translation results are shown in Table 2; English→Russian results in Table 3.
Our baseline WDict is a word-level model with a back-off dictionary. It differs from WUnk in that the latter uses no back-off dictionary, and just represents out-of-vocabulary words as UNK12. The back-off dictionary improves unigram F1 for rare and unseen words, although the improvement is smaller for English→Russian, since the back-off dictionary is incapable of transliterating names.
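A back-off dictionary of this kind is typically applied as a post-processing step: each UNK in the output is replaced by the dictionary translation of the source word it is aligned to, or by a copy of that source word if no dictionary entry exists. The sketch below is a schematic illustration, not the exact procedure used in the experiments; the `alignment` argument (most-attended source position per target position) and the toy inputs are assumptions.

```python
def backoff_unk(output_tokens, source_tokens, alignment, dictionary):
    """Replace UNK tokens in the model output via a back-off dictionary,
    copying the aligned source word when it has no dictionary entry."""
    restored = []
    for j, token in enumerate(output_tokens):
        if token == 'UNK':
            src = source_tokens[alignment[j]]
            restored.append(dictionary.get(src, src))
        else:
            restored.append(token)
    return restored

print(backoff_unk(['das', 'ist', 'UNK'], ['this', 'is', 'Mirzayeva'],
                  alignment=[0, 1, 2], dictionary={'this': 'das'}))
# ['das', 'ist', 'Mirzayeva'] -- the name is copied because it is not in the dictionary
```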
All subword systems operate without a back-off dictionary. We first focus on unigram F1, where all systems improve over the baseline, especially for rare words (36.8%→41.8% for EN→DE; 26.5%→29.7% for EN→RU). For OOVs, the baseline strategy of copying unknown words works well for English→German. However, when alphabets differ, the subword models do much better.
12We use UNK for words that are outside the model vocabulary, and OOV for those that do not occur in the training text.
Our subword representations cause big improvements in the translation of rare and unseen words, but these only constitute 9-11% of the test sets. Since rare words tend to carry central information in a sentence, we suspect that BLEU and CHRF3 underestimate their effect on translation quality. Still, we also see improvements over the baseline in total unigram F1, as well as BLEU and CHRF3, and the subword ensembles outperform the WDict baseline by 0.3–1.3 BLEU and 0.6–2 CHRF3. There is some inconsistency between BLEU and CHRF3, which we attribute to the fact that BLEU has a precision bias, and CHRF3 a recall bias.
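The precision/recall asymmetry comes from how the two metrics combine their components: BLEU is built from clipped n-gram precisions (with only a brevity penalty standing in for recall), while CHRF3 is a character n-gram F-score with beta = 3, i.e. recall is weighted three times as heavily as precision. A simplified sentence-level sketch of the latter follows; it averages per-order F-scores and skips some details of the official implementation (e.g. whitespace handling), so it is an illustration rather than a reference scorer.

```python
from collections import Counter

def chrf(hypothesis, reference, max_n=6, beta=3.0):
    """Simplified character n-gram F-score in the spirit of chrF
    (Popović, 2015); beta=3 gives CHRF3 its recall bias."""
    scores = []
    for n in range(1, max_n + 1):
        hyp = Counter(hypothesis[i:i + n] for i in range(len(hypothesis) - n + 1))
        ref = Counter(reference[i:i + n] for i in range(len(reference) - n + 1))
        overlap = sum((hyp & ref).values())
        precision = overlap / max(sum(hyp.values()), 1)
        recall = overlap / max(sum(ref.values()), 1)
        if precision + recall == 0:
            scores.append(0.0)
        else:
            scores.append((1 + beta ** 2) * precision * recall
                          / (beta ** 2 * precision + recall))
    return sum(scores) / len(scores)

print(round(chrf('Forschungsinstitute', 'Gesundheitsforschungsinstitute'), 3))
```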
For English→German, we observe the best BLEU score of 25.3 with C2-50k, but the best CHRF3 score of 54.1 with BPE-J90k. For comparison to the (to our knowledge) best non-neural MT system on this data set, we report syntax-based SMT results (Sennrich and Haddow, 2015). We observe that our best systems outperform the syntax-based system in terms of BLEU, but not in terms of CHRF3. Regarding other neural systems, Luong et al. (2015a) report a BLEU score of 25.9 on newstest2015, but we note that they use an ensemble of 8 independently trained models, and also report strong improvements from applying dropout, which we did not use. We are confident that our improvements to the translation of rare words are orthogonal to improvements achievable through other improvements in the network architecture, training algorithm, or better ensembles.
For English→Russian, the state of the art is the phrase-based system by Haddow et al. (2015). It outperforms our WDict baseline by 1.5 BLEU. The subword models are a step towards closing this gap, and BPE-J90k yields an improvement of 1.3 BLEU, and 2.0 CHRF3, over WDict.
As a further comment on our translation results, we want to emphasize that performance variability is still an open problem with NMT. On our development set, we observe differences of up to 1 BLEU between different models. For single systems, we report the results of the model that performs best on dev (out of 8), which has a stabilizing effect, but how to control for randomness deserves further attention in future research.
# 5 Analysis
# 5.1 Unigram accuracy
Our main claims are that the translation of rare and unknown words is poor in word-level NMT models, and that subword models improve the translation of these word types. To further illustrate the effect of different subword segmentations on the translation of rare and unseen words, we plot target-side words sorted by their frequency in the training set.13 To analyze the effect of vocabulary size, we also include the system C2-3/500k, which is a system with the same vocabulary size as the WDict baseline, and character bigrams to represent unseen words.
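The analysis behind this plot boils down to computing unigram precision, recall and F1 separately for groups of target words that share a training-set frequency (or frequency-rank) bin. Below is a rough sketch of that computation, without the smoothing described in footnote 13; the binning by raw frequency thresholds and the argument names are assumptions of this sketch.

```python
from collections import Counter

def unigram_f1_by_bin(hyp_tokens, ref_tokens, train_freq, bin_edges):
    """Unigram precision/recall/F1 per training-frequency bin.

    `train_freq` maps a word to its training-set frequency (0 for OOVs);
    `bin_edges` is an ascending list of frequency thresholds; words above
    the last threshold fall into the final bin."""
    def bin_of(word):
        freq = train_freq.get(word, 0)
        for b, edge in enumerate(bin_edges):
            if freq <= edge:
                return b
        return len(bin_edges)

    results = {}
    for b in range(len(bin_edges) + 1):
        hyp = Counter(w for w in hyp_tokens if bin_of(w) == b)
        ref = Counter(w for w in ref_tokens if bin_of(w) == b)
        overlap = sum((hyp & ref).values())          # clipped matches
        precision = overlap / max(sum(hyp.values()), 1)
        recall = overlap / max(sum(ref.values()), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-9)
        results[b] = (precision, recall, f1)
    return results
```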
Figure 2 shows results for the English→German ensemble systems on newstest2015. Unigram F1 of all systems tends to decrease for lower-frequency words. The baseline system has a spike in F1 for OOVs, i.e. words that do not occur in the training text. This is because a high proportion of OOVs are names, for which a copy from the source to the target text is a good strategy for English→German.
The systems with a target vocabulary of 500 000 words mostly differ in how well they translate words with rank > 500 000. A back-off dictionary is an obvious improvement over producing UNK, but the subword system C2-3/500k achieves better performance. Note that all OOVs that the back-off dictionary produces are words that are copied from the source, usually names, while the subword systems can productively form new words such as compounds.
13We perform binning of words with the same training set frequency, and apply bezier smoothing to the graph.
For the 50 000 most frequent words, the representation is the same for all neural networks, and all neural networks achieve comparable unigram F1 for this category. For the interval between frequency rank 50 000 and 500 000, the comparison between C2-3/500k and C2-50k unveils an interesting difference. The two systems only differ in the size of the shortlist, with C2-3/500k representing words in this interval as single units, and C2-50k via subword units. We find that the performance of C2-3/500k degrades heavily up to frequency rank 500 000, at which point the model switches to a subword representation and performance recovers. The performance of C2-50k remains more stable. We attribute this to the fact that subword units are less sparse than words. In our training set, the frequency rank 50 000 corresponds to a frequency of 60 in the training data; the frequency rank 500 000 to a frequency of 2. Because subword representations are less sparse, reducing the size of the network vocabulary, and representing more words via subword units, can lead to better performance.
The F1 numbers hide some qualitative differences between systems. For English→German, WDict produces few OOVs (26.5% recall), but with high precision (60.6%), whereas the subword systems achieve higher recall, but lower precision. We note that the character bigram model C2-50k produces the most OOV words, and achieves relatively low precision of 29.1% for this category. However, it outperforms the back-off dictionary in recall (33.0%). BPE-60k, which suffers from transliteration (or copy) errors due to segmentation inconsistencies, obtains a slightly better precision (32.4%), but a worse recall (26.6%). In contrast to BPE-60k, the joint BPE encoding of BPE-J90k improves both precision (38.6%) and recall (29.8%).
For English→Russian, unknown names can only rarely be copied, and usually require transliteration. Consequently, the WDict baseline performs more poorly for OOVs (9.2% precision; 5.2% recall), and the subword models improve both precision and recall (21.9% precision and 15.6% recall for BPE-J90k). The full unigram F1 plot is shown in Figure 3.
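These per-category precision/recall pairs combine into the OOV F1 values via the usual harmonic mean; a quick check with the English→Russian numbers quoted above (the rounded inputs explain small deviations from the tabulated values):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(21.9, 15.6), 1))   # 18.2, close to the OOV F1 reported for BPE-J90k
print(round(f1(9.2, 5.2), 1))     # 6.6, the WDict OOV F1
```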
| name | segmentation | shortlist | BLEU (single) | BLEU (ens-8) | CHRF3 (single) | CHRF3 (ens-8) | unigram F1 all | rare | OOV |
|---|---|---|---|---|---|---|---|---|---|
| phrase-based (Haddow et al., 2015) | - | - | 24.3 | - | 53.8 | - | 56.0 | 31.3 | 16.5 |
| WUnk | - | - | 18.8 | 22.4 | 46.5 | 49.9 | 54.2 | 25.2 | 0.0 |
| WDict | - | - | 19.1 | 22.8 | 47.5 | 51.0 | 54.8 | 26.5 | 6.6 |
| C2-50k | char-bigram | 50 000 | 20.9 | 24.1 | 49.0 | 51.6 | 55.2 | 27.8 | 17.4 |
| BPE-60k | BPE | - | 20.5 | 23.6 | 49.8 | 52.7 | 55.3 | 29.7 | 15.6 |
| BPE-J90k | BPE (joint) | - | 20.4 | 24.1 | 49.7 | 53.0 | 55.8 | 29.7 | 18.3 |

Table 3: English→Russian translation performance (BLEU, CHRF3 and unigram F1) on newstest2015. Ens-8: ensemble of 8 models. Best NMT system in bold. Unigram F1 (with ensembles) is computed for all words (n = 55654), rare words, and OOVs.
Figure 2: English→German unigram F1 on newstest2015 plotted by training set frequency rank for different NMT systems.
Table 4 shows two translation examples for the translation direction English→German, Table 5 for English→Russian. The baseline system fails for all of the examples, either by deleting content (health), or by copying source words that should be translated or transliterated. The subword translations of health research institutes show that the subword systems are capable of learning translations when oversplitting (research→Fo|rs|ch|un|g), or when the segmentation does not match morpheme boundaries: the segmentation Forschungs|instituten would be linguistically more plausible, and simpler to align to the English research institutes, than the segmentation Forsch|ungsinstitu|ten in the BPE-60k system, but still, a correct translation is produced. If the systems have failed to learn a translation due to data sparseness, like for asinine, which should be translated as dumm, we see translations that are wrong, but could be plausible for (partial) loanwords (asinine Situation→Asinin-Situation).
Figure 3: English→Russian unigram F1 on newstest2015 plotted by training set frequency rank for different NMT systems.
The English→Russian examples show that the subword systems are capable of transliteration. However, transliteration errors do occur, either due to ambiguous transliterations, or because of non-consistent segmentations between source and target text which make it hard for the system to learn a transliteration mapping. Note that the BPE-60k system encodes Mirzayeva inconsistently for the two language pairs (Mirz|ayeva→Мир|за|ева Mir|za|eva). This example is still translated correctly, but we observe spurious insertions and deletions of characters in the BPE-60k system. An example is the transliteration of rakfisk, where a п is inserted and a к is deleted. We trace this error back to translation pairs in the training data with inconsistent segmentations, such as (p|rak|ri|ti→пра|крит|и
source: health research institutes
reference: Gesundheitsforschungsinstitute
WDict: Forschungsinstitute
C2-50k: Fo|rs|ch|un|gs|in|st|it|ut|io|ne|n
BPE-60k: Gesundheits|forsch|ungsinstitu|ten
BPE-J90k: Gesundheits|forsch|ungsin|stitute

source: asinine situation
reference: dumme Situation
WDict: asinine situation → UNK → asinine
C2-50k: as|in|in|e situation → As|in|en|si|tu|at|io|n
BPE-60k: as|in|ine situation → A|in|line-|Situation
BPE-J90k: as|in|ine situation → As|in|in-|Situation

Table 4: English→German translation example. '|' marks subword boundaries.
source: Mirzayeva
reference: Мирзаева (Mirzaeva)
WDict: Mirzayeva → UNK → Mirzayeva
C2-50k: Mi|rz|ay|ev|a → Ми|рз|ае|ва (Mi|rz|ae|va)
BPE-60k: Mirz|ayeva → Мир|за|ева (Mir|za|eva)
BPE-J90k: Mir|za|yeva → Мир|за|ева (Mir|za|eva)

Table 5: English→Russian translation examples. '|' marks subword boundaries.

(pra|krit|i)), from which the translation (rak→пра) is erroneously learned. The segmentation of the joint BPE system (BPE-J90k) is more consistent (pra|krit|i→пра|крит|и (pra|krit|i)).
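Joint BPE avoids many of these inconsistencies simply by learning one set of merge operations on the source and target training text together, so that shared character sequences (names, loanwords) end up segmented the same way on both sides. The following is a compact sketch of that idea in the spirit of the BPE learning loop described earlier in the paper; it is simplified (no vocabulary thresholds or efficiency tricks) and the function names are our own.

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[a, b] += freq
    return pairs

def learn_joint_bpe(source_words, target_words, num_merges):
    """Learn a single list of merge operations on source and target text
    together (the BPE-J90k setting), so that e.g. a name and its
    transliteration are segmented consistently on both sides."""
    vocab = Counter()
    for w in list(source_words) + list(target_words):
        vocab[' '.join(list(w[:-1]) + [w[-1] + '</w>'])] += 1
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(best)) + r'(?!\S)')
        vocab = Counter({pattern.sub(''.join(best), w): f for w, f in vocab.items()})
    return merges
```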
# 6 Conclusion
The main contribution of this paper is that we show that neural machine translation systems are capable of open-vocabulary translation by representing rare and unseen words as a sequence of subword units.14 This is both simpler and more effective than using a back-off translation model. We introduce a variant of byte pair encoding for word segmentation, which is capable of encoding open vocabularies with a compact symbol vocabulary of variable-length subword units. We show performance gains over the baseline with both BPE segmentation, and a simple character bigram segmentation.
Our analysis shows that not only out-of-vocabulary words, but also rare in-vocabulary words are translated poorly by our baseline NMT
system, and that reducing the vocabulary size of subword models can actually improve performance. In this work, our choice of vocabulary size is somewhat arbitrary, and mainly motivated by comparison to prior work. One avenue of future research is to learn the optimal vocabulary size for a translation task, which we expect to depend on the language pair and amount of training data, automatically. We also believe there is further potential in bilingually informed segmentation algorithms to create more alignable subword units, although the segmentation algorithm cannot rely on the target text at runtime.
14The source code of the segmentation algorithms is available at https://github.com/rsennrich/subword-nmt.
While the relative effectiveness will depend on language-specific factors such as vocabulary size, we believe that subword segmentations are suitable for most language pairs, eliminating the need for large NMT vocabularies or back-off models.
# Acknowledgments
We thank Maja Popović for her implementation of CHRF, with which we verified our re-implementation. The research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland. This project received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement 645452 (QT21).
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations (ICLR).
Issam Bazzi and James R. Glass. 2000. Modeling out-of-vocabulary words for robust speech recognition. In Sixth International Conference on Spoken Language Processing, ICSLP 2000 / INTERSPEECH 2000, pages 401–404, Beijing, China.
Jan A. Botha and Phil Blunsom. 2014. Compositional Morphology for Word Representations and Language Modelling. In Proceedings of the 31st International Conference on Machine Learning (ICML), Beijing, China.
Rohan Chitnis and John DeNero. 2015. Variable-Length Word Encodings for Neural Translation Models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics.
Mathias Creutz and Krista Lagus. 2002. Unsupervised Discovery of Morphemes. In Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning, pages 21–30. Association for Computational Linguistics.
Nadir Durrani, Hassan Sajjad, Hieu Hoang, and Philipp Koehn. 2014. Integrating an Unsupervised Transliteration Model into Statistical Machine Translation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2014, pages 148–153, Gothenburg, Sweden.
Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameterization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics.
Philip Gage. 1994. A New Algorithm for Data Compression. C Users J., 12(2):23–38, February.
Barry Haddow, Matthias Huck, Alexandra Birch, Nikolay Bogoychev, and Philipp Koehn. 2015. The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 126–133, Lisbon, Portugal. Association for Computational Linguistics.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On Using Very Large Target Vocabulary for Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10, Beijing, China. Association for Computational Linguistics.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle. Association for Computational Linguistics.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-Aware Neural Language Models. CoRR, abs/1508.06615.
Philipp Koehn and Kevin Knight. 2003. Empirical Methods for Compound Splitting. In EACL '03: Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics, pages 187–193, Budapest, Hungary. Association for Computational Linguistics.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
Franklin M. Liang. 1983. Word hy-phen-a-tion by com-put-er. Ph.D. thesis, Stanford University, Department of Linguistics, Stanford, CA.
Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015a. Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1520–1530, Lisbon, Portugal. Association for Computational Linguistics.
Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015b. Character-based Neural Machine Translation. ArXiv e-prints, November.
Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better Word Representations with Recursive Neural Networks for Morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, CoNLL 2013, Sofia, Bulgaria, August 8-9, 2013, pages 104–113.
Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics.
Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the Rare Word Problem in Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11–19, Beijing, China. Association for Computational Linguistics.
Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocký. 2012. Subword Language Modeling with Neural Networks. Unpublished.
Graham Neubig, Taro Watanabe, Shinsuke Mori, and Tatsuya Kawahara. 2012. Machine Translation without Words through Substring Alignment. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 1: Long Papers, pages 165–174.
Sonja Nießen and Hermann Ney. 2000. Improving SMT quality with morpho-syntactic analysis. In 18th Int. Conf. on Computational Linguistics, pages 1081–1085.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, pages 1310–1318, Atlanta, USA.
Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
Rico Sennrich and Barry Haddow. 2015. A Joint Dependency Model of Morphological and Syntactic Structure for Statistical Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2081–2087, Lisbon, Portugal. Association for Computational Linguistics.
Benjamin Snyder and Regina Barzilay. 2008. Unsupervised Multilingual Learning for Morphological Segmentation. In Proceedings of ACL-08: HLT, pages 737–745, Columbus, Ohio. Association for Computational Linguistics.
David Stallard, Jacob Devlin, Michael Kayser, Yoong Keok Lee, and Regina Barzilay. 2012. Unsupervised Morphology Rivals Supervised Morphology for Arabic MT. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 2: Short Papers, pages 322–327.
Miloš Stanojević, Amir Kamran, Philipp Koehn, and Ondřej Bojar. 2015. Results of the WMT15 Metrics Shared Task. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 256–273, Lisbon, Portugal. Association for Computational Linguistics.
1508.07909 | 51 | Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Net- works. In Advances in Neural Information Process- ing Systems 27: Annual Conference on Neural Infor- mation Processing Systems 2014, pages 3104â3112, Montreal, Quebec, Canada.
Jörg Tiedemann. 2009. Character-based PSMT for Closely Related Languages. In Proceedings of 13th Annual Conference of the European Association for Machine Translation (EAMT'09), pages 12–19.
Jörg Tiedemann. 2012. Character-Based Pivot Translation for Under-Resourced Languages and Domains. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 141–151, Avignon, France. Association for Computational Linguistics.
David Vilar, Jan-Thorsten Peter, and Hermann Ney. 2007. Can We Translate Letters? In Second Workshop on Statistical Machine Translation, pages 33–39, Prague, Czech Republic. Association for Computational Linguistics. | 1508.07909#51 | Neural Machine Translation of Rare Words with Subword Units | Neural machine translation (NMT) models typically operate with a fixed
vocabulary, but translation is an open-vocabulary problem. Previous work
addresses the translation of out-of-vocabulary words by backing off to a
dictionary. In this paper, we introduce a simpler and more effective approach,
making the NMT model capable of open-vocabulary translation by encoding rare
and unknown words as sequences of subword units. This is based on the intuition
that various word classes are translatable via smaller units than words, for
instance names (via character copying or transliteration), compounds (via
compositional translation), and cognates and loanwords (via phonological and
morphological transformations). We discuss the suitability of different word
segmentation techniques, including simple character n-gram models and a
segmentation based on the byte pair encoding compression algorithm, and
empirically show that subword models improve over a back-off dictionary
baseline for the WMT 15 translation tasks English-German and English-Russian by
1.1 and 1.3 BLEU, respectively. | http://arxiv.org/pdf/1508.07909 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted at ACL 2016; new in this version: figure 3 | null | cs.CL | 20150831 | 20160610 | [] |
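Continuing the illustration of subword segmentation (again a toy sketch, not the paper's implementation): once a ranked list of merges has been learned, a rare or unseen word can be split by repeatedly applying the highest-priority merge that is still present. The hard-coded `merges` list below is assumed to have come from a BPE learner such as the one sketched earlier.

```python
def segment(word, merges):
    """Split one word into subword units by greedily applying merges in learned order."""
    ranks = {pair: i for i, pair in enumerate(merges)}
    symbols = list(word) + ['</w>']
    while len(symbols) > 1:
        candidates = [(ranks[(a, b)], i)
                      for i, (a, b) in enumerate(zip(symbols, symbols[1:]))
                      if (a, b) in ranks]
        if not candidates:
            break
        _, i = min(candidates)                      # best-ranked pair is merged first
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
    return symbols

# A few merges as they might be learned from a toy corpus, ordered by priority.
merges = [('e', 's'), ('es', 't'), ('est', '</w>'), ('l', 'o'), ('lo', 'w')]
print(segment('lowest', merges))   # ['low', 'est</w>']
```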
1508.06491 | 0 | arXiv:1508.06491v2 [cs.CL] 12 Apr 2017
# Alignment-Based Compositional Semantics for Instruction Following
Jacob Andreas and Dan Klein Computer Science Division University of California, Berkeley {jda,klein}@cs.berkeley.edu
# Abstract
This paper describes an alignment-based model for interpreting natural language instructions in context. We approach instruction following as a search over plans, scoring sequences of actions conditioned on structured observations of text and the environment. By explicitly modeling both the low-level compositional structure of individual actions and the high-level structure of full plans, we are able to learn both grounded representations of sentence meaning and pragmatic constraints on interpretation. To demonstrate the model's flexibility, we apply it to a diverse set of benchmark tasks. On every task, we outperform strong task-specific baselines, and achieve several new state-of-the-art results.
atomic actions. Within each sentence–action pair, the model infers a structure-to-structure alignment between the syntax of the sentence and a graph-based representation of the action. | 1508.06491#0 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
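The abstract above frames instruction following as a search over plans scored as whole sequences. Purely to illustrate that framing (the action inventory, environment, and scoring function here are invented for the sketch and are not the paper's model), a beam search over action sequences might look like:

```python
def beam_search(initial_state, step, score, actions, max_len=5, beam_size=3):
    """Keep the `beam_size` best partial plans, extending each by one action per step.
    `step(state, action)` returns the next state; `score(plan)` scores a whole plan."""
    beams = [([], initial_state)]
    for _ in range(max_len):
        candidates = []
        for plan, state in beams:
            for action in actions:
                candidates.append((plan + [action], step(state, action)))
        beams = sorted(candidates, key=lambda c: score(c[0]), reverse=True)[:beam_size]
    return max(beams, key=lambda c: score(c[0]))[0]

# Toy environment: the state is a position on a line and the goal is position 3.
step = lambda pos, a: pos + 1 if a == 'move' else pos
score = lambda plan: -abs(3 - sum(a == 'move' for a in plan)) - 0.01 * len(plan)
plan = beam_search(0, step, score, actions=('move', 'turn_left', 'turn_right', 'stop'))
print(plan)   # a length-5 plan containing exactly three 'move' actions
```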
1508.06491 | 1 | atomic actions. Within each sentence–action pair, the model infers a structure-to-structure alignment between the syntax of the sentence and a graph-based representation of the action.
At a high level, our agent is a block-structured, graph-valued conditional random field, with alignment potentials to relate instructions to actions and transition potentials to encode the environment model (Figure 3). Explicitly modeling sequence-to-sequence alignments between text and actions allows flexible reasoning about action sequences, enabling the agent to determine which actions are specified (perhaps redundantly) by text, and which actions must be performed automatically (in order to satisfy pragmatic constraints on interpretation). Treating instruction following as a sequence prediction problem, rather than a series of independent decisions (Branavan et al., 2009; Artzi and Zettlemoyer, 2013), makes it possible to use general-purpose planning machinery, greatly increasing inferential power.
# Introduction | 1508.06491#1 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
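This paper scores plans with alignment potentials (relating instructions to actions) and transition potentials (encoding the environment model). The following is only a schematic rendering of that decomposition; the monotone sentence–action alignment and the placeholder potentials are assumptions of the sketch, not the paper's learned conditional random field.

```python
def plan_score(sentences, actions, align_potential, transition_potential):
    """Score a candidate plan as a sum of text-to-action alignment potentials
    and action-to-action transition potentials."""
    total = 0.0
    for i, action in enumerate(actions):
        # Toy monotone alignment: later actions attach to the last sentence.
        sentence = sentences[min(i, len(sentences) - 1)]
        total += align_potential(sentence, action)
    for prev, nxt in zip(actions, actions[1:]):
        total += transition_potential(prev, nxt)
    return total

# Placeholder potentials: reward lexical overlap, discourage immediately undoing a turn.
align_potential = lambda s, a: float(sum(tok in s.lower() for tok in a.split('_')))
transition_potential = lambda a, b: -1.0 if {a, b} == {'turn_left', 'turn_right'} else 0.0

instructions = ['Go down the yellow hall.', 'Turn left at the intersection.']
plan = ['move_forward', 'turn_left', 'move_forward']
print(plan_score(instructions, plan, align_potential, transition_potential))   # 2.0
```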
1508.06615 | 1 | # Courant Institute of Mathematical Sciences New York University {jernite,dsontag}@cs.nyu.edu
# Abstract
We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information. | 1508.06615#1 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
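The abstract above describes building word representations with a convolutional network over characters followed by max-over-time pooling. A minimal NumPy sketch of that step is given below; the embedding size, filter count, and single filter width are made-up values (the paper's model uses several filter widths and feeds the pooled vector through a highway layer into the LSTM).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy character vocabulary and randomly initialized parameters (sizes are illustrative).
chars = {c: i for i, c in enumerate('abcdefghijklmnopqrstuvwxyz')}
char_dim, n_filters, width = 15, 25, 3
char_emb = rng.normal(size=(len(chars), char_dim))
filters = rng.normal(size=(n_filters, width, char_dim))
bias = np.zeros(n_filters)

def word_feature(word):
    """Convolve filters of one width over the word's character embeddings,
    then take the max over time (positions) for each filter."""
    C = char_emb[[chars[c] for c in word]]               # (word_len, char_dim)
    windows = [C[i:i + width] for i in range(len(word) - width + 1)]
    conv = np.stack([np.tanh(np.tensordot(filters, w, axes=([1, 2], [0, 1])) + bias)
                     for w in windows])                  # (n_windows, n_filters)
    return conv.max(axis=0)                              # max-over-time pooling

print(word_feature('absurdity').shape)   # (25,) -> would be fed to highway/LSTM layers
```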
1508.06491 | 2 | # Introduction
In instruction-following tasks, an agent executes a sequence of actions in a real or simulated environment, in response to a sequence of natural language commands. Examples include giving navigational directions to robots and providing hints to automated game-playing agents. Plans specified with natural language exhibit compositionality both at the level of individual actions and at the overall sequence level. This paper describes a framework for learning to follow instructions by leveraging structure at both levels.
Our primary contribution is a new, alignment-based approach to grounded compositional semantics. Building on related logical approaches (Reddy et al., 2014; Pourdamghani et al., 2014), we recast instruction following as a pair of nested, structured alignment problems. Given instructions and a candidate plan, the model infers a sequence-to-sequence alignment between sentences and | 1508.06491#2 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 2 | Introduction Language modeling is a fundamental task in artificial intelligence and natural language processing (NLP), with applications in speech recognition, text generation, and machine translation. A language model is formalized as a probability distribution over a sequence of strings (words), and traditional methods usually involve making an n-th order Markov assumption and estimating n-gram probabilities via counting and subsequent smoothing (Chen and Goodman 1998). The count-based models are simple to train, but probabilities of rare n-grams can be poorly estimated due to data sparsity (despite smoothing techniques).
Neural Language Models (NLM) address the n-gram data sparsity issue through parameterization of words as vectors (word embeddings) and using them as inputs to a neural network (Bengio, Ducharme, and Vincent 2003; Mikolov et al. 2010). The parameters are learned as part of the training process. Word embeddings obtained through NLMs exhibit the property whereby semantically close words are likewise close in the induced vector space (as is the case with non-neural techniques such as Latent Semantic Analysis (Deerwester, Dumais, and Harshman 1990)). | 1508.06615#2 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
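The row above contrasts count-based n-gram language models (counts plus smoothing) with neural models that use word embeddings. As a reminder of what the count-based side looks like, here is a toy bigram model with add-one smoothing; real systems use the stronger smoothing schemes surveyed by Chen and Goodman, so this is only a minimal illustration.

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()
vocab = set(corpus)
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_add_one(w, prev):
    """Add-one (Laplace) smoothed bigram probability P(w | prev)."""
    return (bigrams[(prev, w)] + 1) / (unigrams[prev] + len(vocab))

print(p_add_one('cat', 'the'))   # seen bigram: relatively high probability
print(p_add_one('mat', 'cat'))   # unseen bigram: small but non-zero probability
```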
1508.06491 | 3 | The fragment of semantics necessary to complete most instruction-following tasks is essentially predicate–argument structure, with limited influence from quantification and scoping. Thus the problem of sentence interpretation can reasonably be modeled as one of finding an alignment between language and the environment it describes. We allow this structure-to-structure alignment – an "overlay" of language onto the world – to be mediated by linguistic structure (in the form of dependency parses) and structured perception (in what we term grounding graphs). Our model thereby reasons directly about the relationship between language and observations of the environment, without the need for an intermediate logical representation of sentence meaning. This, in turn, makes it possible to incorporate flexible feature representations that have been difficult to integrate with previous work in semantic parsing.
We apply our approach to three established
. . . right round the white water but stay quite close 'cause you don't otherwise you're going to be in that stone creek . . .
Go down the yellow hall. Turn left at the intersection of the yellow and the gray.
Clear the right column. Then the other column. Then the row.
(a) Map reading | 1508.06491#3 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 3 | While NLMs have been shown to outperform count-based n-gram language models (Mikolov et al. 2011), they are blind to subword information (e.g. morphemes). For example, they do not know, a priori, that eventful, eventfully, uneventful, and uneventfully should have structurally related embeddings in the vector space. Embeddings of rare words can thus be poorly estimated, leading to high perplexities for rare words (and words surrounding them). This is especially problematic in morphologically rich languages with long-tailed frequency distributions or domains with dynamic vocabularies (e.g. social media). | 1508.06615#3 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 4 | Go down the yellow hall. Turn left at the intersection of the yellow and the gray.
Clear the right column. Then the other column. Then the row.
(a) Map reading
(b) Maze navigation
(c) Puzzle solving
Figure 1: Example tasks handled by our framework. The tasks feature noisy text, over- and under-specification of plans, and challenging search problems.
instruction-following benchmarks: the map reading task of Vogel and Jurafsky (2010), the maze navigation task of MacMahon et al. (2006), and the puzzle solving task of Branavan et al. (2009). An example from each is shown in Figure 1. These benchmarks exhibit a range of qualitative properties – both in the length and complexity of their plans, and in the quantity and quality of accompanying language. Each task has been studied in isolation, but we are unaware of any published approaches capable of robustly handling all three. Our general model outperforms strong, task-specific baselines in each case, achieving relative error reductions of 15–20% over several state-of-the-art results. Experiments demonstrate the importance of our contributions in both compositional semantics and search over plans. We have released all code for this project at github.com/jacobandreas/instructions. | 1508.06491#4 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 4 | In this work, we propose a language model that leverages subword information through a character-level convolutional neural network (CNN), whose output is used as an input to a recurrent neural network language model (RNN-LM). Unlike previous works that utilize subword information via morphemes (Botha and Blunsom 2014; Luong, Socher, and Manning 2013), our model does not require morphological tagging as a pre-processing step. And, unlike the recent line of work which combines input word embeddings with features from a character-level model (dos Santos and Zadrozny 2014; dos Santos and Guimaraes 2015), our model does not utilize word embeddings at all in the input layer. Given that most of the parameters in NLMs are from the word embeddings, the proposed model has significantly fewer parameters than previous NLMs, making it attractive for applications where model size may be an issue (e.g. cell phones).
To summarize, our contributions are as follows:
⢠on English, we achieve results on par with the existing state-of-the-art on the Penn Treebank (PTB), despite hav- ing approximately 60% fewer parameters, and | 1508.06615#4 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
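The rows above note that the model feeds character-level CNN features through a highway network rather than using word embeddings. A one-layer NumPy sketch of the standard highway transform z = t ⊙ g(W_H y + b_H) + (1 − t) ⊙ y is shown below; the dimensions, random parameters, and the negative gate bias are assumptions for the example rather than the paper's trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway(y, W_H, b_H, W_T, b_T):
    """One highway layer: a transform gate t mixes a nonlinear transform of the
    input with the input itself: z = t * tanh(W_H y + b_H) + (1 - t) * y."""
    t = sigmoid(W_T @ y + b_T)           # transform gate
    return t * np.tanh(W_H @ y + b_H) + (1.0 - t) * y

dim = 25                                  # chosen to match the char-CNN sketch above
y = rng.normal(size=dim)                  # e.g. a max-pooled character-CNN feature
W_H, W_T = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
b_H, b_T = np.zeros(dim), -2.0 * np.ones(dim)   # negative gate bias favors carrying the input
print(highway(y, W_H, b_H, W_T, b_T).shape)      # (25,)
```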
1508.06491 | 5 | trained to maximize a reward signal provided by black-box execution of the predicted command in the environment. (It is possible to think of response-based learning for question answering (Liang et al., 2013) as a special case.)
This approach uses a well-studied mechanism for compositional interpretation of language, but is subject to certain limitations. Because the environment is manipulated only through black-box execution of the completed semantic parse, there is no way to incorporate current or future environment state into the scoring function. It is also in general necessary to hand-engineer a task-specific formal language for describing agent behavior. Thus it is extremely difficult to work with environments that cannot be modeled with a fixed inventory of predicates (e.g. those involving novel strings or arbitrary real quantities).
# 2 Related work
Existing work on instruction following can be roughly divided into two families: semantic parsers and linear policy estimators. | 1508.06491#5 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 5 | ⢠on English, we achieve results on par with the existing state-of-the-art on the Penn Treebank (PTB), despite hav- ing approximately 60% fewer parameters, and
⢠on morphologically rich languages (Arabic, Czech, French, German, Spanish, and Russian), our model outperforms various baselines (Kneser-Ney, word- level/morpheme-level LSTM), again with fewer parame- ters.
We have released all the code for the models described in this paper.1
Copyright © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
1https://github.com/yoonkim/lstm-char-cnn
Model The architecture of our model, shown in Figure 1, is straight- forward. Whereas a conventional NLM takes word embed- dings as inputs, our model instead takes the output from a single-layer character-level convolutional neural network with max-over-time pooling. | 1508.06615#5 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 6 | # 2 Related work
Existing work on instruction following can be semantic roughly divided into two families: parsers and linear policy estimators.
parsers Parser-based Semantic approaches (Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Kim and Mooney, 2013; Tellex et al., 2011) map from text language These take familiar representing commands. structured prediction models for semantic parsing (Zettlemoyer and Collins, 2005; Wong and Mooney, 2006), and train them with task-provided Instead of attempting to match the supervision. structure of a manually-annotated semantic parse, semantic parsers for instruction following are
Much of contemporary work in this family is evaluated on the maze navigation task introduced by MacMahon et al. (2006). Dukes (2013) also in- troduced a âblocks worldâ task for situated parsing of spatial robot commands. | 1508.06491#6 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 6 | For notation, we denote vectors with bold lower-case (e.g. xt, b), matrices with bold upper-case (e.g. W, Uo), scalars with italic lower-case (e.g. x, b), and sets with cursive upper- case (e.g. V, C) letters. For notational convenience we as- sume that words and characters have already been converted into indices.
Recurrent Neural Network A recurrent neural network (RNN) is a type of neural net- work architecture particularly suited for modeling sequen- tial phenomena. At each time step t, an RNN takes the input vector xt â Rn and the hidden state vector htâ1 â Rm and produces the next hidden state ht by applying the following recursive operation: | 1508.06615#6 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 7 | Linear policy estimators An alternative fam- ily of approaches is based on learning a pol- icy over primitive actions directly (Branavan et al., 2009; Vogel and Jurafsky, 2010).1 Policy- based approaches instantiate a Markov decision process representing the action domain, and ap- ply standard supervised or reinforcement-learning approaches to learn a function for greedily select- ing among actions. In linear policy approximators, natural language instructions are incorporated di1This is distinct from semantic parsers in which greedy inference happens to have an interpretation as a policy (Vla- chos and Clark, 2014).
rectly into state observations, and reading order becomes part of the action selection process.
Almost all existing policy-learning approaches make use of an unstructured parameterization, with a single (ï¬at) feature vector representing all text and observations. Such approaches are thus restricted to problems that are simple enough (and have small enough action spaces) to be effectively characterized in this fashion. While there is a great deal of ï¬exibility in the choice of feature func- tion (which is free to inspect the current and fu- ture state of the environment, the whole instruc- tion sequence, etc.), standard linear policy estima- tors have no way to model compositionality in lan- guage or actions. | 1508.06491#7 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 7 | ht = f (Wxt + Uhtâ1 + b) (1) Here W â RmÃn, U â RmÃm, b â Rm are parameters of an afï¬ne transformation and f is an element-wise nonlin- earity. In theory the RNN can summarize all historical in- formation up to time t with the hidden state ht. In practice however, learning long-range dependencies with a vanilla RNN is difï¬cult due to vanishing/exploding gradients (Ben- gio, Simard, and Frasconi 1994), which occurs as a result of the Jacobianâs multiplicativity with respect to time.
(Hochreiter and Schmidhuber 1997) addresses the problem of learning long range dependencies by augmenting the RNN with a memory cell vector ct â Rn at each time step. Concretely, one step of an LSTM takes as input xt, htâ1, ctâ1 and produces ht, ct via the following intermediate calculations: | 1508.06615#7 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 8 | Agents in this family have been evaluated on a variety of tasks, including map reading (Anderson et al., 1991) and gameplay (Branavan et al., 2009).
Though both families address the same class of instruction-following problems, they have been applied to a totally disjoint set of tasks. It should be emphasized that there is nothing inherent to policy learning that prevents the use of composi- tional structure, and nothing inherent to general compositional models that prevents more compli- cated dependence on environment state. Indeed, previous work (Branavan et al., 2011; Narasimhan et al., 2015) uses aspects of both to solve a differ- ent class of gameplay problems. In some sense, our goal in this paper is simply to combine the strengths of semantic parsers and linear policy es- timators for fully general instruction following. As we shall see, however, this requires changes to many aspects of representation, learning and in- ference.
# 3 Representations
We wish to train a model capable of following commands in a simulated environment. We do so by presenting the model with a sequence of train- ing pairs (x, y), where each x is a sequence of nat- ural language instructions (x1, x2, . . . , xm), e.g.:
(Go down the yellow hall., Turn left., | 1508.06491#8 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06491 | 9 | (Go down the yellow hall., Turn left.,
and each y is a demonstrated action sequence (y1, y2, . . . , yn), e.g.:
(rotate(90), move(2), . . . )
Given a start state, y can equivalently be char- acterized by a sequence of (state, action, state)
(a) Text Go down the yellow hall b) Syntax EN ©) Sy va rN * go down the yellow hall â de > he Hl an _ : or | Yellow (c) Alignment aoe ee 8~ 6) (d) Perception Gg) â28 move(2) ES (e) Environment
Figure 2: Structure-to-structure alignment connecting a sin- gle sentence (via its syntactic analysis) to the environment state (via its grounding graph). The connecting alignments take the place of a traditional semantic parse and allow ï¬exi- ble, feature-driven linking between lexical primitives and per- ceptual factors. | 1508.06491#9 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 9 | Here o(-) and tanh(-) are the element-wise sigmoid and hy- perbolic tangent functions, © is the element-wise multipli- cation operator, and i;, f;, o, are referred to as input, for- get, and output gates. Att = 1, ho and co are initialized to zero vectors. Parameters of the LSTM are W!, U/,b/â for j⬠{i f,0,9}Memory cells in the LSTM are additive with respect to time, alleviating the gradient vanishing problem. Gradient exploding is still an issue, though in practice simple opti- mization strategies (such as gradient clipping) work well. LSTMs have been shown to outperform vanilla RNNs on many tasks, including on language modeling (Sundermeyer, Schluter, and Ney 2012). It is easy to extend the RNN/LSTM to two (or more) layers by having another network whose
absurdity is: recognized 7 betweennext word and prediction Softmax output to obtain distribution over next word Long short-term memory network Highway network Max-over-time poolinglayer Convolution layer with multiple filters of different widths Concatenation of character embeddings moment the is recognized | 1508.06615#9 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 10 | triples resulting from execution of the environ- ment model. An example instruction is shown in Figure 2a. An example action, situated in the en- vironment where it occurs, is shown in Figure 2e. Our model performs compositional interpreta- tion of instructions by leveraging existing struc- ture inherent in both text and actions. Thus we interpret xi and yj not as raw strings and primitive actions, but rather as structured objects.
Linguistic structure We assume access to a pre- trained parser, and in particular that each of the instructions xi is represented by a tree-structured dependency parse. An example is shown in Fig- ure 2b.
Action structure By analogy to the represen- tation of instructions as parse trees, we assume that each (state, action, state) triple (provided by the environment model) can be characterized by a grounding graph.2 The structure and content of
2We note that the instruction following model of Tellex et al. (2011) features a similarly named âGeneralized Groundthis representation is task-speciï¬c. An example grounding graph for the maze navigation task is shown in Figure 2d. The example contains a node corresponding to the primitive action move(2) (in the upper left), and several nodes correspond- ing to locations in the environment that are visible after the action is performed. | 1508.06491#10 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |