doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1508.06615 | 10 | Figure 1: Architecture of our language model applied to an example sentence. Best viewed in color. Here the model takes absurdity as the current input and combines it with the history (as represented by the hidden state) to predict the next word, is. First layer performs a lookup of character embeddings (of dimension four) and stacks them to form the matrix Ck. Then convolution operations are applied between Ck and multiple filter matrices. Note that in the above example we have twelve filters: three filters of width two (blue), four filters of width three (yellow), and five filters of width four (red). A max-over-time pooling operation is applied to obtain a fixed-dimensional representation of the word, which is given to the highway network. The highway network's output is used as the input to a multi-layer LSTM. Finally, an affine transformation followed by a softmax is applied over the hidden representation of the LSTM to obtain the distribution over the next word. Cross entropy loss between the (predicted) distribution over next word and the actual next word is | 1508.06615#10 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 11 | Each node in the graph (and, though not depicted, each edge) is decorated with a list of features. These features might be simple indicators (e.g. whether the primitive action performed was move or rotate), real values (the distance traveled) or even string-valued (English-language names of visible landmarks, if available in the environment description). Formally, a grounding graph consists of a tuple (V, E, L, f_V, f_E), with
- V a set of vertices
- E ⊆ V × V a set of (directed) edges
- L a space of labels (numbers, strings, etc.)
- f_V : V → 2^L a vertex feature function
- f_E : E → 2^L an edge feature function
In this paper we have tried to remain agnostic to details of graph construction. Our goal with the grounding graph framework is simply to accommodate a wider range of modeling decisions than allowed by existing formalisms. Graphs might be constructed directly, given access to a structured virtual environment (as in all experiments in this paper), or alternatively from outputs of a perceptual system. For our experiments, we have remained as close as possible to task representations described in the existing literature. Details for each task can be found in the accompanying software package. | 1508.06491#11 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
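The grounding-graph chunk above describes a labeled graph (V, E, L, f_V, f_E) with feature functions on vertices and edges. The following minimal Python sketch shows one possible container for such a graph; the class name, toy environment, and feature strings are illustrative assumptions, not the paper's accompanying software package.

```python
class GroundingGraph:
    """Vertices V, directed edges E (pairs of vertices), and feature maps f_V, f_E to label sets."""
    def __init__(self, vertices, edges, vertex_features, edge_features):
        self.V = set(vertices)
        self.E = set(edges)
        self.f_V = vertex_features    # vertex id -> set of labels
        self.f_E = edge_features      # (u, v)    -> set of labels

    def vertex_labels(self, v):
        return self.f_V.get(v, set())

    def edge_labels(self, u, v):
        return self.f_E.get((u, v), set())

# A toy state: an agent that just moved 2.0 units toward a named landmark.
g = GroundingGraph(
    vertices=["action0", "landmark0"],
    edges=[("action0", "landmark0")],
    vertex_features={
        "action0": {"type=move", "distance=2.0"},         # indicator and real-valued features
        "landmark0": {"name=old mill", "type=landmark"},  # string-valued feature
    },
    edge_features={("action0", "landmark0"): {"rel=toward"}},
)
print(g.vertex_labels("action0"))
```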
1508.06491 | 12 | Graph-based representations are extremely common in formal semantics (Jones et al., 2012; Reddy et al., 2014), and the version presented here corresponds to a simple generalization of familiar formal methods. Indeed, if L is the set of all atomic entities and relations, f_V returns a unique label for every v ∈ V, and f_E always returns a vector with one active feature, we recover the existentially-quantified portion of first order logic exactly, and in this form can implement large parts of classical neo-Davidsonian semantics (Parsons, 1990) using grounding graphs.
ing Graph" (G3) formalism. A G3 links the syntax of the input command to the action ultimately executed, and is thus more analogous to our structured alignment variable (Figure 2c) than our perceptual representation.
Crucially, with an appropriate choice of L this formalism also makes it possible to go beyond set-theoretic relations, and incorporate string-valued features (like names of entities and landmarks) and real-valued features (like colors and positions) as well. | 1508.06491#12 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 12 | input at t is ht (from the first network). Indeed, having multiple layers is often crucial for obtaining competitive performance on various tasks (Pascanu et al. 2013).
Recurrent Neural Network Language Model Let V be the fixed size vocabulary of words. A language model specifies a distribution over wt+1 (whose support is V) given the historical sequence w1:t = [w1, . . . , wt]. A recurrent neural network language model (RNN-LM) does this by applying an affine transformation to the hidden layer followed by a softmax:
$\Pr(w_{t+1} = j \mid w_{1:t}) = \frac{\exp(h_t \cdot p^j + q^j)}{\sum_{j' \in \mathcal{V}} \exp(h_t \cdot p^{j'} + q^{j'})} \qquad (3)$ | 1508.06615#12 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
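The RNN-LM prediction in the chunk above is an affine transformation of the hidden state followed by a softmax over the vocabulary (Equation (3)). A minimal NumPy sketch, with toy dimensions and random parameters standing in for a trained model:

```python
import numpy as np

def next_word_distribution(h_t, P, q):
    """Pr(w_{t+1} = j | w_{1:t}) = softmax over j of h_t . p^j + q^j (Equation (3))."""
    scores = h_t @ P + q            # affine transformation of the hidden state
    scores -= scores.max()          # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

rng = np.random.default_rng(0)
m, V = 8, 20                        # toy hidden size and vocabulary size
h_t = rng.normal(size=m)
P, q = rng.normal(size=(m, V)), np.zeros(V)      # output embeddings and biases
print(next_word_distribution(h_t, P, q).sum())   # ~1.0
```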
1508.06491 | 13 | Lexical semantics We must eventually combine features provided by parse trees with features provided by the environment. Examples here might include simple conjunctions (word=yellow ∧ rgb=(0.5, 0.5, 0.0)) or more complicated computations like edit distance between landmark names and lexical items. Features of the latter kind make it possible to behave correctly in environments containing novel strings or other features unseen during training.
the syntax-semantics interface has been troublesome for some logic-based approaches: while past work has used related machinery for selecting lexicon entries (Berant and Liang, 2014) or for rewriting logical forms (Kwiatkowski et al., 2013), the relationship between text and the environment has ultimately been mediated by a discrete (and indeed finite) inventory of predicates. Several recent papers have investigated simple grounded models with real-valued output spaces (Andreas and Klein, 2014; McMahan and Stone, 2015), but we are unaware of any fully compositional system in recent literature that can incorporate observations of these kinds.
Formally, we assume access to a joining feature function φ : (2^L × 2^L) → R^d. As with grounding graphs, our goal is to make the general framework as flexible as possible, and for individual experiments have chosen φ to emulate modeling decisions from previous work. | 1508.06491#13 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
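The joining feature function φ : (2^L × 2^L) → R^d described above maps a pair of label sets to a real vector. The sketch below is one hypothetical instantiation combining an indicator conjunction with a string-similarity feature (difflib's ratio standing in for edit distance); the specific features are illustrative, not the ones used in the paper's experiments.

```python
import difflib

def joining_features(word_labels, node_labels):
    """phi: (2^L x 2^L) -> R^d, here with d = 2: an indicator conjunction and a string similarity."""
    conj = 1.0 if ("word=yellow" in word_labels and "rgb=(0.5, 0.5, 0.0)" in node_labels) else 0.0
    words = [l.split("=", 1)[1] for l in word_labels if l.startswith("word=")]
    names = [l.split("=", 1)[1] for l in node_labels if l.startswith("name=")]
    sim = max((difflib.SequenceMatcher(None, w, n).ratio()   # proxy for edit distance
               for w in words for n in names), default=0.0)
    return [conj, sim]

print(joining_features({"word=mill"}, {"name=old mill", "type=landmark"}))
```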
1508.06615 | 13 | where p^j is the j-th column of P ∈ R^{m×|V|} (also referred to as the output embedding),2 and q^j is a bias term. Similarly, for a conventional RNN-LM which usually takes words as inputs, if wt = k, then the input to the RNN-LM at t is the input embedding x^k, the k-th column of the embedding matrix X ∈ R^{n×|V|}. Our model simply replaces the input embeddings X with the output from a character-level convolutional neural network, to be described below.
If we denote w1:T = [w1, · · · , wT ] to be the sequence of words in the training corpus, training involves minimizing the negative log-likelihood (NLL) of the sequence
$NLL = -\sum_{t=1}^{T} \log \Pr(w_t \mid w_{1:t-1}) \qquad (4)$
which is typically done by truncated backpropagation through time (Werbos 1990; Graves 2013). | 1508.06615#13 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
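Equation (4) above is the standard corpus negative log-likelihood. A small NumPy sketch of computing it from per-step predicted distributions over the vocabulary (the toy distributions and targets are illustrative):

```python
import numpy as np

def corpus_nll(step_distributions, targets):
    """NLL = -sum_t log Pr(w_t | w_{1:t-1}); rows of step_distributions are predicted distributions."""
    probs = step_distributions[np.arange(len(targets)), targets]
    return -np.log(probs).sum()

rng = np.random.default_rng(0)
T, V = 5, 10
dists = rng.dirichlet(np.ones(V), size=T)    # T valid probability distributions over a toy vocabulary
targets = rng.integers(0, V, size=T)         # the words that actually occurred
print(corpus_nll(dists, targets))
```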
1508.06491 | 14 | # 4 Model
As noted in the introduction, we approach instruction following as a sequence prediction problem. Thus we must place a distribution over sequences of actions conditioned on instructions. We decompose the problem into two components, describing interlocking models of "path structure" and "action structure". Path structure captures how sequences of instructions give rise to sequences of actions, while action structure captures the compositional relationship between individual utterances and the actions they specify.
Figure 3: Our model is a conditional random field that describes distributions over state-action sequences conditioned on input text. Each variable's domain is a structured value. Sentences align to a subset of the state-action sequences, with the rest of the states filled in by pragmatic (planning) implication. State-to-state structure represents planning constraints (environment model) while state-to-text structure represents compositional alignment. All potentials are log-linear and feature-driven.
# Path structure: aligning utterances to actions | 1508.06491#14 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 14 | which is typically done by truncated backpropagation through time (Werbos 1990; Graves 2013).
Character-level Convolutional Neural Network In our model, the input at time t is an output from a character-level convolutional neural network (CharCNN), which we describe in this section. CNNs (LeCun et al. 1989) have achieved state-of-the-art results on computer vision (Krizhevsky, Sutskever, and Hinton 2012) and have also been shown to be effective for various NLP tasks (Collobert et al. 2011). Architectures employed for NLP applications differ in that they typically involve temporal rather than spatial convolutions.
Let C be the vocabulary of characters, d be the dimensionality of character embeddings,3 and Q ∈ R^{d×|C|} be the matrix of character embeddings. Suppose that word k ∈ V is made up of a sequence of characters [c1, . . . , cl], where l is the length of word k. Then the character-level representation of k is given by the matrix C^k ∈ R^{d×l}, where the j-th column corresponds to the character embedding for cj (i.e. the cj-th column of Q).4 | 1508.06615#14 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
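Building the character matrix C^k described above amounts to a column-wise lookup into the embedding matrix Q, with start-of-word and end-of-word symbols appended as in footnote 4. A NumPy sketch with an assumed toy alphabet and d = 4:

```python
import numpy as np

def char_matrix(word, char_to_id, Q, bow="{", eow="}"):
    """C^k in R^{d x (l+2)}: column j is the embedding of the j-th character of the padded word."""
    chars = [bow] + list(word) + [eow]    # start/end-of-word symbols, cf. footnote 4
    ids = [char_to_id[c] for c in chars]
    return Q[:, ids]

d = 4
alphabet = "abcdefghijklmnopqrstuvwxyz{}"
char_to_id = {c: i for i, c in enumerate(alphabet)}
rng = np.random.default_rng(0)
Q = rng.normal(size=(d, len(alphabet)))                 # d x |C| character embedding matrix
print(char_matrix("absurdity", char_to_id, Q).shape)    # (4, 11) = d x (l + 2)
```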
1508.06491 | 15 | # Path structure: aligning utterances to actions
The high-level path structure in the model is depicted in Figure 3. Our goal here is to permit both under- and over-specification of plans, and to expose a planning framework which allows plans to be computed with lookahead (i.e. non-greedily).
These goals are achieved by introducing a sequence of latent alignments between instructions and actions. Consider the multi-step example in Figure 1b. If the first instruction go down the yellow hall were interpreted immediately, we would have a presupposition failure: the agent is facing a wall, and cannot move forward at all. Thus an implicit rotate action, unspecified by text, must be performed before any explicit instructions can be followed.
To model this, we take the probability of a (text, plan, alignment) triple to be log-proportional to the sum of two quantities:
1. a path-only score $\psi(n; \theta) + \sum_j \psi(y_j; \theta)$
2. a path-and-text score, itself the sum of all pair scores ψ(x_i, y_j; θ) licensed by the alignment | 1508.06491#15 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 15 | We apply a narrow convolution between C^k and a filter (or kernel) H ∈ R^{d×w} of width w, after which we add a bias and apply a nonlinearity to obtain a feature map f^k ∈ R^{l−w+1}. Specifically, the i-th element of f^k is given by:
$f^k[i] = \tanh(\langle C^k[*, i:i+w-1], H\rangle + b) \qquad (5)$
2In our work, predictions are at the word-level, and hence we still utilize word embeddings in the output layer.
3Given that |C| is usually small, some authors work with one-hot representations of characters. However we found that using lower dimensional representations of characters (i.e. d < |C|) performed slightly better.
4Two technical details warrant mention here: (1) we append start-of-word and end-of-word characters to each word to better represent prefixes and suffixes and hence C^k actually has l + 2 columns; (2) for batch processing, we zero-pad C^k so that the number of columns is constant (equal to the max word length) for all words in V. | 1508.06615#15 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
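The narrow convolution in Equation (5) above slides a d × w filter H over C^k, taking a Frobenius inner product at each position, adding a bias, and applying tanh. A NumPy sketch with illustrative shapes:

```python
import numpy as np

def narrow_conv(C_k, H, b):
    """Feature map f^k with f^k[i] = tanh(<C^k[:, i:i+w-1], H> + b); length is l - w + 1."""
    d, l = C_k.shape
    _, w = H.shape
    return np.array([np.tanh(np.sum(C_k[:, i:i + w] * H) + b)   # Frobenius inner product + bias
                     for i in range(l - w + 1)])

rng = np.random.default_rng(0)
C_k = rng.normal(size=(4, 9))             # d = 4, word length l = 9
H = rng.normal(size=(4, 3))               # one filter of width w = 3
print(narrow_conv(C_k, H, b=0.0).shape)   # (7,) = l - w + 1
```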
1508.06491 | 16 | 2. a path-and-text score, itself the sum of all pair scores ψ(x_i, y_j; θ) licensed by the alignment
(1) captures our desire for pragmatic constraints on interpretation, and provides a means of encoding the inherent plausibility of paths. We take ψ(n; θ) and ψ(y; θ) to be linear functions of θ. (2) provides context-dependent interpretation of text by means of the structured scoring function ψ(x, y; θ), described in the next section.
Formally, we associate with each instruction x_i a sequence-to-sequence alignment variable a_i ∈ 1 . . . n (recalling that n is the number of actions). Then we have3
$p(y, a \mid x; \theta) \propto \exp\Big\{ \psi(n) + \sum_{j=1}^{n} \psi(y_j) + \sum_{i} \sum_{j=1}^{n} 1[a_i = j]\, \psi(x_i, y_j) \Big\} \qquad (1)$ | 1508.06491#16 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
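Equation (1) above scores a (plan, alignment) pair by a path-only term plus one pair score for every instruction-action link licensed by the alignment. The sketch below computes that unnormalized log score; the ψ callables and the toy instructions are stand-ins for the learned, feature-based potentials and real data.

```python
def log_score(actions, instructions, alignment, psi_n, psi_y, psi_pair):
    """Unnormalized log p(y, a | x) from Eq. (1):
    psi_n(n) + sum_j psi_y(y_j) + sum over aligned pairs of psi_pair(x_i, y_j)."""
    total = psi_n(len(actions)) + sum(psi_y(y) for y in actions)
    for i, x in enumerate(instructions):
        j = alignment[i]                  # instruction i is aligned to action j (0-based here)
        total += psi_pair(x, actions[j])
    return total

# Toy potentials standing in for the learned linear functions of theta.
actions = ["rotate", "move", "move"]
instructions = ["go down the yellow hall", "turn left"]
alignment = [1, 2]                        # monotone: sentence 1 -> 2nd action, sentence 2 -> 3rd action
score = log_score(actions, instructions, alignment,
                  psi_n=lambda n: -0.1 * n,
                  psi_y=lambda y: 0.0,
                  psi_pair=lambda x, y: 1.0 if ("turn" in x) == (y == "rotate") else 0.0)
print(score)
```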
1508.06615 | 16 | where C^k[*, i : i+w−1] is the i-to-(i+w−1)-th column of C^k and ⟨A, B⟩ = Tr(AB^⊤) is the Frobenius inner product. Finally, we take the max-over-time
$y^k = \max_i f^k[i] \qquad (6)$
as the feature corresponding to the filter H (when applied to word k). The idea is to capture the most important feature (the one with the highest value) for a given filter. A filter is essentially picking out a character n-gram, where the size of the n-gram corresponds to the filter width.
We have described the process by which one feature is obtained from one filter matrix. Our CharCNN uses multiple filters of varying widths to obtain the feature vector for k. So if we have a total of h filters H_1, . . . , H_h, then y^k = [y^k_1, . . . , y^k_h] is the input representation of k. For many NLP applications h is typically chosen to be in [100, 1000]. | 1508.06615#16 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
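Putting the pieces from the chunk above together: each filter contributes one number via max-over-time pooling (Equation (6)), and the h filter outputs are concatenated into the word representation y^k. A NumPy sketch with a small, assumed filter bank:

```python
import numpy as np

def char_cnn_features(C_k, filters, biases):
    """y^k: one max-over-time pooled value per filter, concatenated into a vector of length h."""
    d, l = C_k.shape
    feats = []
    for H, b in zip(filters, biases):
        w = H.shape[1]
        fmap = [np.tanh(np.sum(C_k[:, i:i + w] * H) + b) for i in range(l - w + 1)]
        feats.append(max(fmap))          # max-over-time pooling (Equation (6))
    return np.array(feats)

rng = np.random.default_rng(0)
C_k = rng.normal(size=(4, 9))
filters = [rng.normal(size=(4, w)) for w in (2, 2, 2, 3, 3, 3, 3)]   # a toy bank of h = 7 filters
y_k = char_cnn_features(C_k, filters, biases=np.zeros(len(filters)))
print(y_k.shape)                         # (7,)
```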
1508.06491 | 17 | We additionally place a monotonicity constraint on the alignment variables. This model is globally normalized, and for a fixed alignment is equivalent to a linear-chain CRF. In this sense it is analogous to IBM Model 1 (Brown et al., 1993), with the structured potentials ψ(x_i, y_j) taking the place of lexical translation probabilities. While alignment models from machine translation have previously been used to align words to fragments of semantic parses (Wong and Mooney, 2006; Pourdamghani et al., 2014), we are unaware of such models being used to align entire instruction sequences to demonstrations.
# Action structure: aligning words to percepts | 1508.06491#17 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 17 | Highway Network We could simply replace x^k (the word embedding) with y^k at each t in the RNN-LM, and as we show later, this simple model performs well on its own (Table 7). One could also have a multilayer perceptron (MLP) over y^k to model interactions between the character n-grams picked up by the filters, but we found that this resulted in worse performance. Instead we obtained improvements by running y^k through a highway network, recently proposed by Srivastava et al. (2015). Whereas one layer of an MLP applies an affine transformation followed by a nonlinearity to obtain a new set of features,
z = g(Wy + b) (7)
one layer of a highway network does the following:
$z = t \odot g(W_H y + b_H) + (1 - t) \odot y \qquad (8)$
where g is a nonlinearity, t = σ(W_T y + b_T) is called the transform gate, and (1 − t) is called the carry gate. Similar to the memory cells in LSTM networks, highway layers allow for training of deep networks by adaptively carrying some dimensions of the input directly to the output.5 By construction the dimensions of y and z have to match, and hence W_T and W_H are square matrices. | 1508.06615#17 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
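A NumPy sketch of one highway layer as in Equation (8), with g taken to be ReLU and the transform-gate bias initialized near −2 as suggested in footnote 5; the weights here are random placeholders, not trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(y, W_H, b_H, W_T, b_T):
    """z = t * g(W_H y + b_H) + (1 - t) * y with transform gate t = sigmoid(W_T y + b_T); g = ReLU."""
    t = sigmoid(W_T @ y + b_T)                      # transform gate; (1 - t) is the carry gate
    return t * np.maximum(W_H @ y + b_H, 0.0) + (1.0 - t) * y

rng = np.random.default_rng(0)
n = 6
y = rng.normal(size=n)
W_H, W_T = rng.normal(size=(n, n)), rng.normal(size=(n, n))  # square so that dim(z) == dim(y)
b_H, b_T = np.zeros(n), np.full(n, -2.0)                     # bias the gate toward carrying y at init
print(highway_layer(y, W_H, b_H, W_T, b_T))
```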
1508.06491 | 18 | # Action structure: aligning words to percepts
Intuitively, this scoring function ψ(x, y) should capture how well a given utterance describes an action. If neither the utterances nor the actions had structure (i.e. both could be represented with simple bags of features), we would recover something analogous to the conventional policy-learning approach. As structure is essential for some of our tasks, ψ(x, y) must instead fill the role of a semantic parser in a conventional compositional model. Our choice of ψ(x, y) is driven by the following fundamental assumptions: Syntactic relations approximately represent semantic relations. Syntactic proximity implies relational proximity. In this view, there is an additional hidden structure-to-structure alignment between the grounding graph and the parsed text describing it.4 Words line up with nodes, and dependencies line up with relations. Visualizations are shown in Figure 2c and the zoomed-in portion of Figure 3.
As with the top-level alignment variables, this approach can be viewed as a simple relaxation of a familiar model. CCG-based parsers assume that
3Here and in the remainder of this paper, we suppress the dependence of the various potentials on θ in the interest of readability. 4It | 1508.06491#18 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 18 | Experimental Setup As is standard in language modeling, we use perplexity (PPL) to evaluate the performance of our models. Perplexity of a model over a sequence [w1, . . . , wT ] is given by
$PPL = \exp(NLL / T) \qquad (9)$
where NLL is calculated over the test set. We test the model on corpora of varying languages and sizes (statistics available in Table 1).
We conduct hyperparameter search, model introspection, and ablation studies on the English Penn Treebank (PTB) (Marcus, Santorini, and Marcinkiewicz 1993), utilizing the
5Srivastava et al. (2015) recommend initializing b_T to a negative value, in order to militate the initial behavior towards carry. We initialized b_T to a small interval around −2. | 1508.06615#18 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
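Perplexity as defined in Equation (9) is the exponentiated per-token negative log-likelihood. A small sketch with made-up token probabilities:

```python
import numpy as np

def perplexity(log_probs):
    """PPL = exp(NLL / T) over the T test tokens (Equation (9))."""
    return np.exp(-np.sum(log_probs) / len(log_probs))

# A model assigning probability 0.01 to every one of 1000 test tokens has PPL = 100.
print(perplexity(np.log(np.full(1000, 0.01))))
```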
1508.06491 | 19 | 3Here and in the remainder of this paper, we suppress the dependence of the various potentials on θ in the interest of readability. 4It
is formally possible to regard the sequence-to-sequence and structure-to-structure alignments as a single (structured) random variable. However, the two kinds of alignments are treated differently for purposes of inference, so it is useful to maintain a notational distinction.
syntactic type strictly determines semantic type, and that each lexical item is associated with a small set of functional forms. Here we simply allow all words to license all predicates, multiple words to specify the same predicate, and some edges to be skipped. We instead rely on a scoring function to impose soft versions of the hard constraints typically provided by a grammar. Related models have previously been used for question answering (Reddy et al., 2014; Pasupat and Liang, 2015). | 1508.06491#19 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 19 | Language | DATA-S |V| | DATA-S |C| | DATA-S T | DATA-L |V| | DATA-L |C| | DATA-L T
English (EN) | 10 k | 51 | 1 m | 60 k | 197 | 20 m
Czech (CS) | 46 k | 101 | 1 m | 206 k | 195 | 17 m
German (DE) | 37 k | 74 | 1 m | 339 k | 260 | 51 m
Spanish (ES) | 27 k | 72 | 1 m | 152 k | 222 | 56 m
French (FR) | 25 k | 76 | 1 m | 137 k | 225 | 57 m
Russian (RU) | 62 k | 62 | 1 m | 497 k | 111 | 25 m
Arabic (AR) | 86 k | 132 | 4 m | – | – | –
Table 1: Corpus statistics. |V| = word vocabulary size; |C| = character vocabulary size; T = number of tokens in training set. The small English data is from the Penn Treebank and the Arabic data is from the News-Commentary corpus. The rest are from the 2013 ACL Workshop on Machine Translation. |C| is large because of (rarely occurring) special characters.
standard training (0-20), validation (21-22), and test (23-24) splits along with pre-processing by Mikolov et al. (2010). With approximately 1m tokens and |V| = 10k, this version has been extensively used by the language modeling community and is publicly available.6 | 1508.06615#19 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 20 | For the moment let us introduce variables b to denote these structure-to-structure alignments. (As will be seen in the following section, it is straightforward to marginalize over all choices of b. Thus the structure-to-structure alignments are never explicitly instantiated during inference, and do not appear in the final form of ψ(x, y).) For a fixed alignment, we define ψ(x, y, b) according to a recurrence relation. Let x^i be the i-th word of the sentence, and let y^j be the j-th node in the action graph (under some topological ordering). Let c(i) and c(j) give the indices of the dependents of x^i and children of y^j respectively. Finally, let x^{ik} and y^{jl} denote the associated dependency type or relation. Define a "descendant" function: d(i, j) = {(k, l) : k ∈ c(i), l ∈ c(j), (k, l) ∈ b}
Then,
$\psi(x^i, y^j, b) = \exp\big\{\theta^\top \phi(x^i, y^j)\big\} \prod_{(k,l) \in d(i,j)} \Big[ \exp\big\{\theta^\top \phi(x^{ik}, y^{jl})\big\} \cdot \psi(x^k, y^l, b) \Big] \qquad (2)$ | 1508.06491#20 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 20 | With the optimal hyperparameters tuned on PTB, we apply the model to various morphologically rich languages: Czech, German, French, Spanish, Russian, and Arabic. Non-Arabic data comes from the 2013 ACL Workshop on Machine Translation,7 and we use the same train/validation/test splits as in Botha and Blunsom (2014). While the raw data are publicly available, we obtained the preprocessed versions from the authors,8 whose morphological NLM serves as a baseline for our work. We train on both the small datasets (DATA-S) with 1m tokens per language, and the large datasets (DATA-L) including the large English data which has a much bigger |V| than the PTB. Arabic data comes from the News-Commentary corpus,9 and we perform our own preprocessing and train/validation/test splits. In these datasets only singleton words were replaced with <unk> and hence we effectively use the full vocabulary. It is worth noting that the character model can utilize surface forms of OOV tokens (which were replaced with <unk>), but we do not do this and stick to the preprocessed versions (despite disadvantaging the character models) for exact comparison against prior work.
# Optimization | 1508.06615#20 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 21 | This is just an unnormalized synchronous derivation between x and y: at any aligned (node, word) pair, the score for the entire derivation is the score produced by combining that word and node, times the scores at all the aligned descendants. Observe that as long as there are no cycles in the dependency parse, it is perfectly acceptable for the relation graph to contain cycles and even self-loops: the recurrence still bottoms out appropriately.
# 5 Learning and inference
Given a sequence of training pairs (x, y), we wish to find a parameter setting that maximizes p(y|x; θ). If there were no latent alignments a or b, this would simply involve minimization of a convex objective. The presence of latent variables complicates things. Ideally, we would like
Algorithm 1 Computing structure-to-structure alignments | 1508.06491#21 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 21 | # Optimization
The models are trained by truncated backpropagation through time (Werbos 1990; Graves 2013). We backpropagate for 35 time steps using stochastic gradient descent where the learning rate is initially set to 1.0 and halved if the perplexity does not decrease by more than 1.0 on the validation set after an epoch. On DATA-S we use a batch size of 20 and on DATA-L we use a batch size of 100 (for
# 6http://www.fit.vutbr.cz/~imikolov/rnnlm/ 7http://www.statmt.org/wmt13/translation-task.html 8http://bothameister.github.io/ 9http://opus.lingfil.uu.se/News-Commentary.php
Layer | Parameter | Small | Large
CNN | d | 15 | 15
CNN | w | [1, 2, 3, 4, 5, 6] | [1, 2, 3, 4, 5, 6, 7]
CNN | h | [25 · w] | [min{200, 50 · w}]
CNN | f | tanh | tanh
Highway | l | 1 | 2
Highway | g | ReLU | ReLU
LSTM | l | 2 | 2
LSTM | m | 300 | 650 | 1508.06615#21 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
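The optimization recipe above starts SGD at learning rate 1.0 and halves it whenever validation perplexity fails to improve by more than 1.0 in an epoch. A minimal helper expressing just that schedule (the surrounding training loop is omitted and the example perplexities are made up):

```python
def updated_learning_rate(lr, prev_val_ppl, cur_val_ppl, min_improvement=1.0):
    """Halve the SGD learning rate if validation perplexity did not drop by more than 1.0."""
    return lr / 2.0 if prev_val_ppl - cur_val_ppl <= min_improvement else lr

lr = 1.0                                   # initial learning rate quoted in the chunk
for prev, cur in [(120.0, 110.0), (110.0, 109.5), (109.5, 109.2)]:   # made-up validation curve
    lr = updated_learning_rate(lr, prev, cur)
    print(lr)                              # 1.0, 0.5, 0.25
```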
1508.06491 | 22 | Algorithm 1 Computing structure-to-structure alignments
x^i are words in reverse topological order
y^j are grounding graph nodes (root last)
chart is an m × n array
for i = 1 to |x| do
  for j = 1 to |y| do
    score ← exp{θ^⊤ φ(x^i, y^j)}
    for k ∈ c(i) do
      s ← Σ_{l ∈ c(j)} [ exp{θ^⊤ φ(x^{ik}, y^{jl})} · chart[k, l] ]
      score ← score · s
    end for
    chart[i, j] ← score
  end for
end for
return chart[n, m]
to sum over the latent variables, but that sum is intractable. Instead we make a series of variational approximations: first we replace the sum with a maximization, then perform iterated conditional modes, alternating between maximization of the conditional probability of a and θ. We begin by initializing θ randomly. | 1508.06491#22 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
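Algorithm 1 above is an O(|x||y|) dynamic program that sums over structure-to-structure alignments bottom-up. The Python sketch below mirrors that chart computation; pair_score and edge_score are stand-ins for the exp{θᵀφ(·,·)} terms, and the toy sentence and graph are assumptions for illustration only.

```python
def structure_alignment_chart(words, nodes, children_x, children_y, pair_score, edge_score):
    """chart[i][j]: summed score of aligning word i to node j, marginalizing over
    alignments of word i's dependents to node j's children (cf. Algorithm 1).
    Words and nodes are assumed indexed in reverse topological order (roots last)."""
    chart = [[0.0] * len(nodes) for _ in words]
    for i in range(len(words)):
        for j in range(len(nodes)):
            score = pair_score(i, j)               # stand-in for exp{theta . phi(x^i, y^j)}
            for k in children_x[i]:                # each dependent of word i ...
                # ... sums over the children of node j it could align to
                score *= sum(edge_score(i, k, j, l) * chart[k][l] for l in children_y[j])
            chart[i][j] = score
    return chart

# Toy example: "go" has dependent "yellow"; node "move0" has child "hall0".
words, nodes = ["yellow", "go"], ["hall0", "move0"]
children_x = [[], [0]]
children_y = [[], [0]]
chart = structure_alignment_chart(words, nodes, children_x, children_y,
                                  pair_score=lambda i, j: 1.0,
                                  edge_score=lambda i, k, j, l: 0.5)
print(chart[1][1])   # score at the root word / root node pair
```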
1508.06615 | 22 | Table 2: Architecture of the small and large models. d = dimensionality of character embeddings; w = filter widths; h = number of filter matrices, as a function of filter width (so the large model has filters of width [1, 2, 3, 4, 5, 6, 7] of size [50, 100, 150, 200, 200, 200, 200] for a total of 1100 filters); f, g = nonlinearity functions; l = number of layers; m = number of hidden units.
greater efficiency). Gradients are averaged over each batch. We train for 25 epochs on non-Arabic and 30 epochs on Arabic data (which was sufficient for convergence), picking the best performing model on the validation set. Parameters of the model are randomly initialized over a uniform distribution with support [−0.05, 0.05]. | 1508.06615#22 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 23 | As noted in the preceding section, the variable b does not appear in these equations. Conditioned on a, the sum over structure-to-structure alignments ψ(x, y) = Σ_b ψ(x, y, b) can be performed exactly using a simple dynamic program which runs in time O(|x||y|) (assuming out-degree bounded by a constant, and with |x| and |y| the number of words and graph nodes respectively). This is Algorithm 1.
In our experiments, θ is optimized using L-BFGS (Liu and Nocedal, 1989). Calculation of the gradient with respect to θ requires computation of a normalizing constant involving the sum over p(x, y′, a) for all y′. While in principle the normalizing constant can be computed using the forward algorithm, in practice the state spaces under consideration are so large that even this is intractable. Thus we make an additional approximation, constructing a set Ŷ of alternative actions and taking
$p(y, a \mid x) \approx \prod_{j=1}^{n} \frac{\exp\big\{\sum_i 1[a_i = j]\, \psi(x_i, y_j)\big\}}{\sum_{y' \in \hat{Y}} \exp\big\{\sum_i 1[a_i = j]\, \psi(x_i, y')\big\}}$ | 1508.06491#23 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 23 | For regularization we use dropout (Hinton et al. 2012) with probability 0.5 on the LSTM input-to-hidden layers (except on the initial Highway to LSTM layer) and the hidden-to-output softmax layer. We further constrain the norm of the gradients to be below 5, so that if the L2 norm of the gradient exceeds 5 then we renormalize it to have || · || = 5 before updating. The gradient norm constraint was crucial in training the model. These choices were largely guided by previous work of Zaremba et al. (2014) on word-level language modeling with LSTMs.
Finally, in order to speed up training on DATA-L we employ a hierarchical softmax (Morin and Bengio 2005), a common strategy for training language models with very large |V|, instead of the usual softmax. We pick the number of clusters c = ⌈√|V|⌉ and randomly split V into mutually exclusive and collectively exhaustive subsets V_1, . . . , V_c of (approximately) equal size.10 Then Pr(w_{t+1} = j | w_{1:t}) becomes, | 1508.06615#23 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
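The gradient-norm constraint described above rescales the gradient whenever its L2 norm exceeds 5. A NumPy sketch of that renormalization (the toy gradient is illustrative):

```python
import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Rescale grad so that ||grad||_2 <= max_norm (5 in the paper) before the parameter update."""
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

g = np.full(100, 1.0)                       # ||g||_2 = 10
print(np.linalg.norm(clip_gradient(g)))     # 5.0
```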
1508.06491 | 24 | Ŷ is constructed by sampling alternative actions from the environment model. Meanwhile, maximization of a can be performed exactly using the Viterbi algorithm, without computation of normalizers.
Inference at test time involves a slightly different pair of optimization problems. We again perform iterated conditional modes, here on the alignments a and the unknown output path y. Maximization of a is accomplished with the Viterbi algorithm, exactly as before; maximization of y also uses the Viterbi algorithm, or a beam search when this is computationally infeasible. If bounds on path length are known, it is straightforward to adapt these dynamic programs to efficiently consider paths of all lengths.
# 6 Evaluation | 1508.06491#24 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 24 | $\Pr(w_{t+1} = j \mid w_{1:t}) = \frac{\exp(h_t \cdot s^r + t^r)}{\sum_{r'=1}^{c} \exp(h_t \cdot s^{r'} + t^{r'})} \times \frac{\exp(h_t \cdot p_r^j + q_r^j)}{\sum_{j' \in \mathcal{V}_r} \exp(h_t \cdot p_r^{j'} + q_r^{j'})} \qquad (10)$
where r is the cluster index such that j ∈ V_r. The first term is simply the probability of picking cluster r, and the second
10While Brown clustering/frequency-based clustering is commonly used in the literature (e.g. Botha and Blunsom (2014) use Brown clustering), we used random clusters as our implementation enjoys the best speed-up when the number of words in each cluster is approximately equal. We found random clustering to work surprisingly well. | 1508.06615#24 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
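Equation (10) above factors the word probability into a softmax over the c clusters times a softmax over the words inside the chosen word's cluster. A NumPy sketch with an assumed random, equal-size clustering and toy dimensions:

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def hierarchical_word_prob(j, h_t, S, t_bias, P, q, clusters):
    """Pr(w_{t+1}=j | w_{1:t}) = Pr(cluster r | h_t) * Pr(j | h_t, cluster r), where j is in V_r."""
    r = next(ri for ri, members in enumerate(clusters) if j in members)
    cluster_probs = softmax(h_t @ S + t_bias)            # softmax over the c clusters
    members = clusters[r]
    within = softmax(h_t @ P[:, members] + q[members])   # softmax restricted to V_r
    return cluster_probs[r] * within[members.index(j)]

rng = np.random.default_rng(0)
m, V, c = 8, 12, 3
clusters = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]  # random equal-size split of V
h_t = rng.normal(size=m)
S, t_bias = rng.normal(size=(m, c)), np.zeros(c)   # cluster embeddings s^r and biases t^r
P, q = rng.normal(size=(m, V)), np.zeros(V)        # within-cluster output embeddings and biases
print(hierarchical_word_prob(5, h_t, S, t_bias, P, q, clusters))
```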
1508.06491 | 25 | # 6 Evaluation
As one of the main advantages of this approach is its generality, we evaluate on several different benchmark tasks for instruction following. These exhibit great diversity in both environment structure and language use. We compare our full system to recent state-of-the-art approaches to each task. In the introduction, we highlighted two core aspects of our approach to semantics: compositionality (by way of grounding graphs and structure-to-structure alignments) and planning (by way of inference with lookahead and sequence-to-sequence alignments). To evaluate these, we additionally present a pair of ablation experiments: no grounding graphs (an agent with an unstructured representation of environment state), and no planning (a reflex agent with no lookahead). | 1508.06491#25 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 25 | Test-set perplexity (PPL) and approximate model size on the English Penn Treebank (Table 3 data):
LSTM-Word-Small: 97.6 PPL, 5m parameters
LSTM-Char-Small: 92.3 PPL, 5m parameters
LSTM-Word-Large: 85.4 PPL, 20m parameters
LSTM-Char-Large: 78.9 PPL, 19m parameters
KN-5 (Mikolov et al. 2012): 141.2 PPL, 2m parameters
RNN† (Mikolov et al. 2012): 124.7 PPL, 6m parameters
RNN-LDA† (Mikolov et al. 2012): 113.7 PPL, 7m parameters
genCNN† (Wang et al. 2015): 116.4 PPL, 8m parameters
FOFE-FNNLM† (Zhang et al. 2015): 108.0 PPL, 6m parameters
Deep RNN (Pascanu et al. 2013): 107.5 PPL, 6m parameters
Sum-Prod Net† (Cheng et al. 2014): 100.0 PPL, 5m parameters
LSTM-1† (Zaremba et al. 2014): 82.7 PPL, 20m parameters
LSTM-2† (Zaremba et al. 2014): 78.4 PPL, 52m parameters | 1508.06615#25 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 26 | Map reading Our first application is the map navigation task established by Vogel and Jurafsky (2010), based on data collected for a psychological experiment by Anderson et al. (1991) (Figure 1a). Each training datum consists of a map with a designated starting position, and a collection of landmarks, each labeled with a spatial coordinate and a string name. Names are not always unique, and landmarks in the test set are never observed during training. This map is accompanied by a set of instructions specifying a path from the starting position to some (unlabeled) destination point. These instruction sets are informal and redundant, involving as many as a hundred utterances. They are transcribed from spoken text, so grammatical errors, disfluencies, etc. are common. This is a
Map-reading task results (Table 1 data): precision (P), recall (R), F1:
Vogel and Jurafsky (2010): P 0.46, R 0.51, F1 0.48
Andreas and Klein (2014): P 0.43, R 0.51, F1 0.45
Model [no planning]: P 0.44, R 0.46, F1 0.45
Model [no grounding graphs]: P 0.52, R 0.52, F1 0.52
Model [full]: P 0.51, R 0.60, F1 0.55 | 1508.06491#26 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 26 | Table 3: Performance of our model versus other neural language models on the English Penn Treebank test set. PPL refers to perplexity (lower is better) and size refers to the approximate number of parameters in the model. KN-5 is a Kneser-Ney 5-gram language model which serves as a non-neural baseline. † For these models the authors did not explicitly state the number of parameters, and hence sizes shown here are estimates based on our understanding of their papers or private correspondence with the respective authors.
term is the probability of picking word j given that cluster r is picked. We found that hierarchical softmax was not necessary for models trained on DATA-S.
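As a concrete illustration of this two-stage factorization (pick a cluster, then a word within the cluster), here is a minimal NumPy sketch; the random, roughly equal-sized clustering, the dimensions, and all variable names are illustrative assumptions, not the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)

V, H = 10_000, 650                     # vocabulary size, hidden size (illustrative)
c = int(np.sqrt(V))                    # number of clusters, roughly sqrt(|V|)

# Random clustering into c clusters of approximately equal size.
perm = rng.permutation(V)
clusters = np.array_split(perm, c)     # clusters[r] = word ids in V_r
cluster_of = np.empty(V, dtype=int)
for r, words in enumerate(clusters):
    cluster_of[words] = r

# Output parameters: cluster scores (s, t) and within-cluster scores (p, q).
S = rng.normal(size=(c, H)); t = np.zeros(c)
P = rng.normal(size=(V, H)); q = np.zeros(V)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def word_prob(h, j):
    """Pr(w_{t+1} = j | h_t) under the two-level (hierarchical) softmax of Eq. 10."""
    r = cluster_of[j]
    p_cluster = softmax(S @ h + t)[r]               # first factor: pick cluster r
    members = clusters[r]
    within = softmax(P[members] @ h + q[members])   # second factor: pick j within V_r
    return p_cluster * within[np.where(members == j)[0][0]]

h = rng.normal(size=H)                 # stand-in for the LSTM hidden state at time t
print(word_prob(h, j=42))
```

Each step only scores c cluster logits plus the |V_r| words of one cluster, which is where the speed-up over a full softmax comes from.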
# Results
English Penn Treebank We train two versions of our model to assess the trade-off between performance and size. Architecture of the small (LSTM-Char-Small) and large (LSTM-Char-Large) models is summarized in Table 2. As another baseline, we also train two comparable LSTM models that use word em- beddings only (LSTM-Word-Small, LSTM-Word-Large). LSTM-Word-Small uses 200 hidden units and LSTM-Word- Large uses 650 hidden units. Word embedding sizes are also 200 and 650 respectively. These were chosen to keep the number of parameters similar to the corresponding character-level model. | 1508.06615#26 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 27 | Table 1: Evaluation results for the map-reading task. P is precision, R is recall and F1 is F-measure. Scores are calculated with respect to transitions between landmarks appearing in the reference path (for details see Vogel and Jurafsky (2010)). We use the same train / test split. Some variant of our model achieves the best published results on all three metrics.
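For readers unfamiliar with this metric, a minimal sketch of transition-level precision/recall/F1 follows; treating a path as a set of adjacent (landmark, landmark) pairs is an illustrative simplification, and the official scorer may differ in details.

```python
def prf1(predicted_path, reference_path):
    """Precision/recall/F1 over landmark-to-landmark transitions.

    Each path is a sequence of landmark names; a transition is an adjacent
    (from, to) pair. Illustrative approximation, not the original scorer.
    """
    pred = set(zip(predicted_path, predicted_path[1:]))
    ref = set(zip(reference_path, reference_path[1:]))
    tp = len(pred & ref)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(ref) if ref else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

print(prf1(["start", "hill", "lake", "camp"], ["start", "hill", "camp"]))
```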
Learned feature weights (Table 2 data):
word=top ∧ side=North: 1.31
word=top ∧ side=South: 0.61
word=top ∧ side=East: -0.93
dist=0: 4.51
dist=1: 2.78
dist=4: 1.54
Table 2: Learned feature values. The model learns that the word top often instructs the navigator to position itself above a landmark, occasionally to position itself below a landmark, but rarely to the side. The bottom portion of the table shows learned text-independent constraints: given a choice, near destinations are preferred to far ones (so shorter paths are preferred overall).
prime example of a domain that does not lend itself to logical representation: grammars may be too rigid, and previously-unseen landmarks and real-valued positions are handled more easily with feature machinery than predicate logic. | 1508.06491#27 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 27 | As can be seen from Table 3, our large model is on par with the existing state-of-the-art (Zaremba et al. 2014), despite having approximately 60% fewer parameters. Our small model significantly outperforms other NLMs of similar size, even though it is penalized by the fact that the dataset already has OOV words replaced with <unk> (other models are purely word-level models). While lower perplexities have been reported with model ensembles (Mikolov and Zweig 2012), we do not include them here as they are not comparable to the current work.
Other Languages The model's performance on the English PTB is informative to the extent that it facilitates comparison against the large body of existing work. However, English is relatively simple
Test-set perplexities on DATA-S (Table 4 data; columns: CS, DE, ES, FR, RU, AR):
KN-4 (Botha): 545, 366, 241, 274, 396, 323
MLBL (Botha): 465, 296, 200, 225, 304, -
Small Word: 503, 305, 212, 229, 352, 216
Small Morph: 414, 278, 197, 216, 290, 230
Small Char: 401, 260, 182, 189, 278, 196
Large Word: 493, 286, 200, 222, 357, 172
Large Morph: 398, 263, 177, 196, 271, 148
Large Char: 371, 239, 165, 184, 261, 148 | 1508.06615#27 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 28 | The map task was previously studied by Vogel and Jurafsky (2010), who implemented SARSA with a simple set of features. By combining these features with our alignment model and search procedure, we achieve state-of-the-art results on this task by a substantial margin (Table 1).
Some learned feature values are shown in Table 2. The model correctly infers cardinal directions (the example shows the preferred side of a destination landmark modified by the word top). Like Vogel et al., we see support for both allocentric references (you are on top of the hill) and egocentric references (the hill is on top of you). We can also see pragmatics at work: the model learns useful text-independent constraints, in this case that near destinations should be preferred to far ones.
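To make the role of these weights concrete, here is a hedged sketch of how a linear model scores a candidate grounding using indicator features like those in Table 2; the feature names are copied from the table, while the candidate representation and helper function are invented for illustration.

```python
# Illustrative weights in the spirit of Table 2 (values copied from the table).
weights = {
    ("word=top", "side=North"): 1.31,
    ("word=top", "side=South"): 0.61,
    ("word=top", "side=East"): -0.93,
    "dist=0": 4.51,
    "dist=1": 2.78,
    "dist=4": 1.54,
}

def score(candidate_features):
    """Dot product between a sparse set of indicator features and the weights."""
    return sum(weights.get(f, 0.0) for f in candidate_features)

# A candidate that puts the navigator just above ("North of") the landmark
# named by the word "top", at distance 0 from it.
candidate = [("word=top", "side=North"), "dist=0"]
print(score(candidate))  # 1.31 + 4.51 = 5.82
```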
Maze navigation The next application we consider is the maze navigation task of MacMahon et al. (2006) (Figure 1b). Here, a virtual agent is sit-
Single-sentence success rate (%) on the maze navigation task (Table 3 data):
Kim and Mooney (2012): 57.2
Chen (2012): 57.3
Model [no planning]: 58.9
Model [no grounding graphs]: 51.7
Model [full]: 59.6
Kim and Mooney (2013) [reranked]: 62.8
Artzi et al. (2014) [semi-supervised]: 65.3 | 1508.06491#28 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 28 | Table 4: Test set perplexities for DATA-S. First two rows are from Botha (2014) (except on Arabic where we trained our own KN-4 model) while the last six are from this paper. KN-4 is a Kneser-Ney 4-gram language model, and MLBL is the best performing morphological logbilinear model from Botha (2014). Small/Large refer to model size (see Table 2), and Word/Morph/Char are models with words/morphemes/characters as inputs respectively.
from a morphological standpoint, and thus our next set of results (and arguably the main contribution of this paper) is focused on languages with richer morphology (Table 4, Table 5). | 1508.06615#28 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 29 | Table 3: Evaluation results for the maze navigation task. "Success" shows the percentage of actions resulting in a correct position and orientation after observing a single instruction. We use the leave-one-map-out evaluation employed by previous work.5 All systems are trained on full action sequences. Our model outperforms several task-specific baselines, as well as a baseline with path structure but no action structure.
uated in a maze (whose hallways are distinguished with various wallpapers, carpets, and the presence of a small set of standard objects), and again given instructions for getting from one point to another. This task has been the subject of focused attention in semantic parsing for several years, resulting in a variety of sophisticated approaches.
Despite superficial similarity to the previous navigation task, the language and plans required for this task are quite different. The proportion of instructions to actions is much higher (so redundancy much lower), and the interpretation of language is highly compositional. | 1508.06491#29 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 29 | from a morphological standpoint, and thus our next set of results (and arguably the main contribution of this paper) is focused on languages with richer morphology (Table 4, Table 5).
We compare our results against the morphological log-bilinear (MLBL) model from Botha and Blunsom (2014), whose model also takes into account subword information through morpheme embeddings that are summed at the input and output layers. As comparison against the MLBL models is confounded by our use of LSTMs (widely known to outperform their feed-forward/log-bilinear cousins), we also train an LSTM version of the morphological NLM, where the input representation of a word given to the LSTM is a summation of the word's morpheme embeddings. Concretely, suppose that $\mathcal{M}$ is the set of morphemes in a language, $\mathbf{M} \in \mathbb{R}^{n \times |\mathcal{M}|}$ is the matrix of morpheme embeddings, and $\mathbf{m}_j$ is the j-th column of $\mathbf{M}$ (i.e. a morpheme embedding). Given the input word k, we feed the following representation to the LSTM:
$$\mathbf{x}_k + \sum_{j \in \mathcal{M}_k} \mathbf{m}_j \qquad (11)$$ | 1508.06615#29 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 30 | As can be seen in Table 3, we outperform a number of systems purpose-built for this navigation task. We also outperform both variants of our system, most conspicuously the variant without grounding graphs. This highlights the importance of compositional structure. Recent work by Kim and Mooney (2013) and Artzi et al. (2014) has achieved better results; these systems make use of techniques and resources (respectively, discriminative reranking and a seed lexicon of hand-annotated logical forms) that are largely orthogonal to the ones used here, and might be applied to improve our own results as well.
Puzzle solving The last task we consider is the Crossblock task studied by Branavan et al. (2009) (Figure 1c). Here, again, natural language is used to specify a sequence of actions, in this case the solution to a simple game. The environment is simple enough to be captured with a flat feature
5 We specifically targeted the single-sentence version of this evaluation, as an alternative full-sequence evaluation does not align precisely with our data condition.
Crossblock results (Table 4 data): Match (%) / Success (%):
No text: Match 54, Success 78
Branavan '09: Match 63, Success -
Model [no planning]: Match 64, Success 66
Model [full]: Match 70, Success 86 | 1508.06491#30 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 30 | $$\mathbf{x}_k + \sum_{j \in \mathcal{M}_k} \mathbf{m}_j \qquad (11)$$
where $\mathbf{x}_k$ is the word embedding (as in a word-level model) and $\mathcal{M}_k \subset \mathcal{M}$ is the set of morphemes for word k. The morphemes are obtained by running an unsupervised morphological tagger as a preprocessing step.11 We emphasize that the word embedding itself (i.e. $\mathbf{x}_k$) is added on top of the morpheme embeddings, as was done in Botha and Blunsom (2014). The morpheme embeddings are of size 200/650 for the small/large models respectively. We further train word-level LSTM models as another baseline.
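A minimal NumPy sketch of this input representation (Equation 11) follows; the toy word, its segmentation, and the embedding dimension are invented for illustration and do not come from the paper's preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                            # embedding dimension (200/650 in the paper)

words = ["unkindness"]
morphemes = ["un", "kind", "ness"]               # assumed segmentation for illustration

X = {w: rng.normal(size=n) for w in words}       # word embeddings x_k
M = {m: rng.normal(size=n) for m in morphemes}   # morpheme embeddings m_j

def lstm_input(word, segmentation):
    """x_k plus the sum of the word's morpheme embeddings (Equation 11)."""
    return X[word] + sum(M[m] for m in segmentation)

print(lstm_input("unkindness", ["un", "kind", "ness"]).shape)  # (8,)
```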
On DATA-S it is clear from Table 4 that the character-level models outperform their word-level counterparts de-
11 We use Morfessor Cat-MAP (Creutz and Lagus 2007), as in Botha and Blunsom (2014).
Test-set perplexities on DATA-L (Table 5 data; columns: CS, DE, ES, FR, RU, EN):
KN-4 (Botha): 862, 463, 219, 243, 390, 291
MLBL (Botha): 643, 404, 203, 227, 300, 273
Small Word: 701, 347, 186, 202, 353, 236
Small Morph: 615, 331, 189, 209, 331, 233
Small Char: 578, 305, 169, 190, 313, 216 | 1508.06615#30 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 31 | Crossblock results (Table 4 data): Match (%) / Success (%):
No text: Match 54, Success 78
Branavan '09: Match 63, Success -
Model [no planning]: Match 64, Success 66
Model [full]: Match 70, Success 86
Table 4: Results for the puzzle solving task. "Match" shows the percentage of predicted action sequences that exactly match the annotation. "Success" shows the percentage of predicted action sequences that result in a winning game configuration, regardless of the action sequence performed. Following Branavan et al. (2009), we average across five random train / test folds. Our model achieves state-of-the-art results on this task.
representation, so there is no distinction between the full model and the variant without grounding graphs.
Unlike the other tasks we consider, Crossblock is distinguished by a challenging associated search problem. Here it is nontrivial to find any sequence that eliminates all the blocks (the goal of the puzzle). Thus this example allows us to measure the effectiveness of our search procedure. | 1508.06491#31 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 31 | Table 5: Test set perplexities on DATA-L. First two rows are from Botha (2014), while the last three rows are from the small LSTM models described in the paper. KN-4 is a Kneser-Ney 4-gram language model, and MLBL is the best performing morphological log-bilinear model from Botha (2014). Word/Morph/Char are models with words/morphemes/characters as inputs respectively.
spite, again, being smaller.12 The character models also outperform their morphological counterparts (both MLBL and LSTM architectures), although improvements over the morphological LSTMs are more measured. Note that the morpheme models have strictly more parameters than the word models because word embeddings are used as part of the input. | 1508.06615#31 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 32 | Results are shown in Table 4. As can be seen, our model achieves state-of-the-art performance on this task when attempting to match the human-specified plan exactly. If we are purely concerned with task completion (i.e. solving the puzzle, perhaps not with the exact set of moves specified in the instructions) we can measure this directly. Here, too, we substantially outperform a no-text baseline. Thus it can be seen that text induces a useful heuristic, allowing the model to solve a considerable fraction of problem instances not solved by naïve beam search.
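As a rough illustration of how a learned, text-conditioned score can guide search over action sequences, here is a generic beam-search sketch; the `successors`, `score`, and `is_goal` interfaces, the beam width, and the step limit are placeholders for illustration, not the system's actual implementation.

```python
def beam_search(initial_state, successors, score, is_goal, beam_width=10, max_steps=20):
    """Generic beam search over plans (action sequences).

    `successors(state)` yields (action, next_state) pairs;
    `score(plan, state)` is the learned model score of a partial plan,
    which here plays the role of the text-induced heuristic.
    """
    beam = [([], initial_state)]
    for _ in range(max_steps):
        candidates = []
        for plan, state in beam:
            if is_goal(state):
                return plan
            for action, nxt in successors(state):
                candidates.append((plan + [action], nxt))
        if not candidates:
            break
        candidates.sort(key=lambda ps: score(ps[0], ps[1]), reverse=True)
        beam = candidates[:beam_width]
    # Fall back to the best-scoring partial plan found so far.
    return beam[0][0] if beam else []
```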
The problem of inducing planning heuristics from side information like text is an important one in its own right, and future work might focus specifically on coupling our system with a more sophisticated planner. Even at present, the results in this section demonstrate the importance of lookahead and high-level reasoning in instruction following.
# 7 Conclusion
We have described a new alignment-based compositional model for following sequences of natural language instructions, and demonstrated the effectiveness of this model on a variety of tasks. A fully general solution to the problem of contextual interpretation must address a wide range of well-studied problems, but the work we have described
here provides modular interfaces for the study of a number of fundamental linguistic issues from a machine learning perspective. These include: | 1508.06491#32 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 32 | Due to memory constraints13 we only train the small models on DATA-L (Table 5). Interestingly, we do not observe significant differences going from word to morpheme LSTMs on Spanish, French, and English. The character models again outperform the word/morpheme models. We also observe significant perplexity reductions even on English when V is large. We conclude this section by noting that we used the same architecture for all languages and did not perform any language-specific tuning of hyperparameters.
Discussion Learned Word Representations We explore the word representations learned by the models on the PTB. Table 6 has the nearest neighbors of word representations learned from both the word-level and character-level models. For the character models we compare the representations obtained before and after highway layers. | 1508.06615#32 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 33 | here provides modular interfaces for the study of a number of fundamental linguistic issues from a machine learning perspective. These include:
Pragmatics How do we respond to presupposition failures, and choose among possible interpretations of an instruction disambiguated only by context? The mechanism provided by the sequence-prediction architecture we have described provides a simple answer to this question, and our experimental results demonstrate that the learned pragmatics aid interpretation of instructions in a number of concrete ways: ambiguous references are resolved by proximity in the map reading task, missing steps are inferred from an environment model in the maze navigation task, and vague hints are turned into real plans by knowledge of the rules in Crossblock. A more comprehensive solution might explicitly describe the process by which instruction-givers' own beliefs (expressed as distributions over sequences) give rise to instructions.
Compositional semantics The graph alignment model of semantics presented here is an expressive and computationally efficient generalization of classical logical techniques to accommodate environments like the map task, or those explored in our previous work (Andreas and Klein, 2014). More broadly, our model provides a compositional approach to semantics that does not require an explicit formal language for encoding sentence meaning. Future work might extend this approach to tasks like question answering, where logic-based approaches have been successful. | 1508.06491#33 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 33 | Before the highway layers the representations seem to rely solely on surface forms: for example, the nearest neighbors of you are your, young, four, youth, which are close to you in terms of edit distance. The highway layers, however, seem to enable encoding of semantic features that are not discernible from orthography alone. After highway layers the nearest neighbor of you is we, which is orthographically distinct from you. Another example is while and though; these words are far apart edit distance-wise, yet the composition model is able to place them near each other. The model
12 The difference in parameters is greater for non-PTB corpora as the size of the word model scales faster with |V|. For example, on Arabic the small/large word models have 35m/121m parameters while the corresponding character models have 29m/69m parameters respectively.
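The footnote's point about scaling can be seen with a small back-of-the-envelope calculation; the sketch below only counts the vocabulary-dependent terms (input embeddings plus output softmax) and ignores the LSTM and CharCNN parameters, so it is an approximation rather than a reproduction of the reported totals.

```python
def vocab_dependent_params(vocab_size, emb_dim, hidden_dim):
    """Parameters that grow linearly with |V| in a word-level NLM:
    input embeddings (|V| x d) plus the output softmax weights and biases."""
    return vocab_size * emb_dim + vocab_size * hidden_dim + vocab_size

# Doubling the vocabulary roughly doubles these terms, while a
# character-level input layer stays fixed in |V|.
for V in (50_000, 100_000, 200_000):
    print(V, vocab_dependent_params(V, emb_dim=650, hidden_dim=650))
```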
13All models were trained on GPUs with 2GB memory. | 1508.06615#33 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 34 | Our primary goal in this paper has been to explore methods for integrating compositional semantics and the pragmatic context provided by sequential structures. While there is a great deal of work left to do, we find it encouraging that this general approach results in substantial gains across multiple tasks and contexts.
# Acknowledgments
The authors would like to thank S.R.K. Branavan for assistance with the Crossblock evaluation. The first author is supported by a National Science Foundation Graduate Fellowship.
# References
Anne H. Anderson, Miles Bader, Ellen Gurman Bard, Elizabeth Boyle, Gwyneth Doherty, Simon Garrod, Stephen Isard, Jacqueline Kowtko, Jan McAllister, Jim Miller, et al. 1991. The HCRC map task corpus. Language and Speech, 34(4):351–366.
Jacob Andreas and Dan Klein. 2014. Grounding language with points and paths in continuous spaces. In Proceedings of the Conference on Natural Language Learning.
Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1(1):49–62. | 1508.06491#34 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 34 | Figure 2: Plot of character n-gram representations via PCA for English. Colors correspond to: prefixes (red), suffixes (blue), hyphenated (orange), and all others (grey). Prefixes refer to character n-grams which start with the start-of-word character. Suffixes likewise refer to character n-grams which end with the end-of-word character.
also makes some clear mistakes (e.g. his and hhs), highlighting the limits of our approach, although this could be due to the small dataset.
The learned representations of OOV words (computer-aided, misinformed) are positioned near words with the same part-of-speech. The model is also able to correct for incorrect/non-standard spelling (looooook), indicating potential applications for text normalization in noisy domains. | 1508.06615#34 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 35 | 2014. Learning compact lexicons for CCG semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1273–1283, Doha, Qatar, October. Association for Computational Linguistics.
Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, page 92.
S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 82–90. Association for Computational Linguistics.
S.R.K. Branavan, David Silver, and Regina Barzilay. 2011. Learning to win by reading manuals in a Monte-Carlo framework. In Proceedings of the Human Language Technology Conference of the Association for Computational Linguistics, pages 268–277.
Peter Brown, Vincent Della Pietra, Stephen Della Pietra, and Robert Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, June. | 1508.06491#35 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 35 | Learned Character N-gram Representations As discussed previously, each filter of the CharCNN is essentially learning to detect particular character n-grams. Our initial expectation was that each filter would learn to activate on different morphemes and then build up semantic representations of words from the identified morphemes. However, upon reviewing the character n-grams picked up by the filters (i.e. those that maximized the value of the filter), we found that they did not (in general) correspond to valid morphemes. | 1508.06615#35 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 36 | David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, volume 2, pages 1–2.
David L. Chen. 2012. Fast online lexicon learning for grounded language acquisition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 430–439.
Kais Dukes. 2013. Semantic annotation of robotic spatial commands. In Language and Technology Conference (LTC).
Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In Proceedings of the International Conference on Computational Linguistics, pages 1359–1376.
Joohyun Kim and Raymond J. Mooney. 2012. Unsupervised PCFG induction for grounded language learning with highly ambiguous supervision. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 433–444.
Joohyun Kim and Raymond J. Mooney. 2013. Adapting discriminative reranking to grounded language learning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. | 1508.06491#36 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 36 | To get a better intuition for what the character composition model is learning, we plot the learned representations of all character n-grams (that occurred as part of at least two words in V) via principal components analysis (Figure 2). We feed each character n-gram into the CharCNN and use the CharCNN's output as the fixed dimensional representation for the corresponding character n-gram. As is apparent from Figure 2, the model learns to differentiate between prefixes (red), suffixes (blue), and others (grey). We also find that the representations are particularly sensitive to character n-grams containing hyphens (orange), presumably because this is a strong signal of a word's part-of-speech.
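The step of feeding a character string through convolution filters and taking the max over time can be sketched in a few lines of NumPy; the filter width, dimensions, random parameters, and the use of '{' and '}' as start/end-of-word markers are illustrative assumptions rather than the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, w, h = 15, 3, 4            # char embedding dim, filter width, number of filters

chars = "{abcdefghijklmnopqrstuvwxyz}-"          # '{' / '}' as word boundary markers
E = {ch: rng.normal(size=d) for ch in chars}     # character embeddings
H = rng.normal(size=(h, d * w))                  # h convolution filters of width w
b = np.zeros(h)

def char_cnn(s):
    """Fixed-dimensional representation of a character string:
    slide each width-w filter over the character matrix, apply tanh,
    then take the max over time for each filter."""
    C = np.stack([E[ch] for ch in s], axis=1)                         # d x len(s)
    windows = [C[:, i:i + w].reshape(-1) for i in range(C.shape[1] - w + 1)]
    feats = np.tanh(np.stack(windows) @ H.T + b)                      # (len(s)-w+1) x h
    return feats.max(axis=0)                                          # (h,)

print(char_cnn("{un-kind}"))  # e.g. a representation for one character n-gram
```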
Highway Layers We quantitatively investigate the effect of highway network layers via ablation studies (Table 7). We train a model without any highway layers, and find that performance decreases significantly. As the difference in performance could be due to the decrease in model size, we also train a model that feeds yk (i.e. word representation from the CharCNN) | 1508.06615#36 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 37 | Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446.
Dong Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503–528.
Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, and action in route instructions. Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, 2(6):4.
2015. A Bayesian model of grounded color semantics. Transactions of the Association for Computational Linguistics, 3:103–115.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. | 1508.06491#37 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 37 | Nearest neighbors by model (Table 6 data; in-vocabulary query words: while, his, you, richard, trading; out-of-vocabulary: computer-aided, misinformed, looooook):
LSTM-Word: while -> although, letting, though, minute; his -> your, her, my, their; you -> conservatives, we, guys, i; richard -> jonathan, robert, neil, nancy; trading -> advertised, advertising, turnover, turnover; computer-aided, misinformed, looooook -> (no representation)
LSTM-Char (before highway): while -> chile, whole, meanwhile, white; his -> this, hhs, is, has; you -> your, young, four, youth; richard -> hard, rich, richer, richter; trading -> heading, training, reading, leading; computer-aided -> computer-guided, computerized, disk-drive, computer; misinformed -> informed, performed, transformed, inform; looooook -> look, cook, looks, shook
LSTM-Char (after highway): while -> meanwhile, whole, though, nevertheless; his -> hhs, this, their, your; you -> we, your, doug, i; richard -> eduard, gerard, edward, carl; trading -> trade, training, traded, trader; computer-aided -> computer-guided, computer-driven, computerized, computer; misinformed -> informed, performed, outperformed, transformed; looooook -> look, looks, looked, looking
Table 6: Nearest neighbor words (based on cosine similarity) of word representations from the large word-level and character-level (before and after highway layers) models trained on the PTB. Last three words are OOV words, and therefore they do not have representations in the word-level model.
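Nearest-neighbor lists of this kind can be reproduced with a few lines of NumPy; the toy vocabulary and random embedding matrix below merely stand in for the learned word representations.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["while", "although", "though", "his", "your", "her", "you", "we"]
W = rng.normal(size=(len(vocab), 50))        # stand-in for learned word vectors

def nearest_neighbors(query, k=3):
    """Top-k words by cosine similarity to the query word's vector."""
    X = W / np.linalg.norm(W, axis=1, keepdims=True)
    sims = X @ X[vocab.index(query)]
    order = np.argsort(-sims)
    return [vocab[i] for i in order if vocab[i] != query][:k]

print(nearest_neighbors("while"))
```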
PTB perplexity for LSTM-Char by number of highway layers (Table 7 data; Small / Large):
No highway layers: 100.3 / 84.6
One highway layer: 92.3 / 79.7
Two highway layers: 90.1 / 78.9
One MLP layer: 111.2 / 92.6 | 1508.06615#37 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 38 | Terence Parsons. 1990. Events in the semantics of English. MIT Press.
Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
Nima Pourdamghani, Yang Gao, Ulf Hermjakob, and Kevin Knight. 2014. Aligning English strings with abstract meaning representation graphs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics, 2:377–392.
Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the National Conference on Artificial Intelligence.
Andreas Vlachos and Stephen Clark. 2014. A new corpus and imitation learning framework for context-dependent semantic parsing. Transactions of the Association for Computational Linguistics, 2:547–559. | 1508.06491#38 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 38 | |V| 10 k 25 k 50 k 100 k T 1 m 17% 16% 21% 5 m 10 m 25 m â 8% 14% 16% 21% 9% 12% 15% 9% 9% 10% 8% 9%
Table 7: Perplexity on the Penn Treebank for small/large models trained with/without highway layers.
through a one-layer multilayer perceptron (MLP) to use as input into the LSTM. We find that the MLP does poorly, although this could be due to optimization issues.
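For reference, a highway layer mixes a nonlinear transform of its input with the input itself through a learned gate, whereas the plain MLP baseline above drops the carry path. The NumPy sketch below follows the standard highway formulation of Srivastava et al. (2015); the dimensions, parameters, and choice of ReLU for the nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6                                                    # dimension of the CharCNN output y_k

W_H = rng.normal(size=(m, m)); b_H = np.zeros(m)
W_T = rng.normal(size=(m, m)); b_T = np.full(m, -2.0)    # negative bias: gate starts near identity

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway(y):
    """z = t * g(W_H y + b_H) + (1 - t) * y, with transform gate t = sigmoid(W_T y + b_T)."""
    t = sigmoid(W_T @ y + b_T)
    return t * np.maximum(0.0, W_H @ y + b_H) + (1.0 - t) * y

def mlp(y):
    """The one-layer MLP baseline: same transform, but no carry of y."""
    return np.maximum(0.0, W_H @ y + b_H)

y = rng.normal(size=m)
print(highway(y), mlp(y))
```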
Table 8: Perplexity reductions by going from small word-level to character-level models based on different corpus/vocabulary sizes on German (DE). |V| is the vocabulary size and T is the number of tokens in the training set. The full vocabulary of the 1m dataset was less than 100k and hence that scenario is unavailable. | 1508.06615#38 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06491 | 39 | Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 806–814. Association for Computational Linguistics.
Yuk Wah Wong and Raymond Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 439–446, New York, New York.
2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 658–666. | 1508.06491#39 | Alignment-based compositional semantics for instruction following | This paper describes an alignment-based model for interpreting natural
language instructions in context. We approach instruction following as a search
over plans, scoring sequences of actions conditioned on structured observations
of text and the environment. By explicitly modeling both the low-level
compositional structure of individual actions and the high-level structure of
full plans, we are able to learn both grounded representations of sentence
meaning and pragmatic constraints on interpretation. To demonstrate the model's
flexibility, we apply it to a diverse set of benchmark tasks. On every task, we
outperform strong task-specific baselines, and achieve several new
state-of-the-art results. | http://arxiv.org/pdf/1508.06491 | Jacob Andreas, Dan Klein | cs.CL | in proceedings of EMNLP 2015 | null | cs.CL | 20150826 | 20170412 | [] |
1508.06615 | 39 | We hypothesize that highway networks are especially well-suited to work with CNNs, adaptively combining local features detected by the individual filters. CNNs have already proven to be successful for many NLP tasks (Collobert et al. 2011; Shen et al. 2014; Kalchbrenner, Grefenstette, and Blunsom 2014; Kim 2014; Zhang, Zhao, and LeCun 2015; Lei, Barzilay, and Jaakola 2015), and we posit that further gains could be achieved by employing highway layers on top of existing CNN architectures.
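A minimal numpy sketch of one highway layer, as referenced above: a transform gate interpolates between a non-linear transform of the input and the input itself. The ReLU non-linearity, the toy dimensions, and the negative gate bias are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_H, b_H, W_T, b_T):
    # The transform gate t decides, per dimension, how much of the
    # transformed features to keep versus how much of x to carry through.
    t = sigmoid(W_T @ x + b_T)            # gate values in (0, 1)
    h = np.maximum(0.0, W_H @ x + b_H)    # non-linear transform (ReLU assumed)
    return t * h + (1.0 - t) * x          # same dimension as x

# Toy usage: x stands in for the max-pooled CharCNN feature vector of one word.
rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)
W_H, b_H = 0.1 * rng.standard_normal((d, d)), np.zeros(d)
W_T, b_T = 0.1 * rng.standard_normal((d, d)), np.full(d, -2.0)  # bias toward carrying x
y = highway_layer(x, W_H, b_H, W_T, b_T)
```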
ity reductions as a result of going from a small word-level model to a small character-level model. To vary the vocabulary size we take the most frequent k words and replace the rest with <unk>. As with previous experiments the character model does not utilize surface forms of <unk> and simply treats it as another token. Although Table 8 suggests that the perplexity reductions become less pronounced as the corpus size increases, we nonetheless find that the character-level model outperforms the word-level model in all scenarios. | 1508.06615#39 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 40 | We also anecdotally note that (1) having one to two highway layers was important, but more highway layers generally resulted in similar performance (though this may depend on the size of the datasets), (2) having more convolutional layers before max-pooling did not help, and (3) highway layers did not improve models that only used word embeddings as inputs.
# Effect of Corpus/Vocab Sizes
We next study the effect of training corpus/vocabulary sizes on the relative performance between the different models. We take the German (DE) dataset from DATA-L and vary the training corpus/vocabulary sizes, calculating the perplex-

# Further Observations
We report on some further experiments and observations:
⢠Combining word embeddings with the CharCNNâs out- put to form a combined representation of a word (to be used as input to the LSTM) resulted in slightly worse performance (81 on PTB with a large model). This was surprising, as improvements have been reported on part- of-speech tagging (dos Santos and Zadrozny 2014) and named entity recognition (dos Santos and Guimaraes 2015) by concatenating word embeddings with the out- put from a character-level CNN. While this could be due | 1508.06615#40 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 41 | to insufficient experimentation on our part,14 it suggests that for some tasks, word embeddings are superfluous: character inputs are good enough.
⢠While our model requires additional convolution opera- tions over characters and is thus slower than a comparable word-level model which can perform a simple lookup at the input layer, we found that the difference was manage- able with optimized GPU implementationsâfor example on PTB the large character-level model trained at 1500 to- kens/sec compared to the word-level model which trained at 3000 tokens/sec. For scoring, our model can have the same running time as a pure word-level model, as the CharCNNâs outputs can be pre-computed for all words in V. This would, however, be at the expense of increased model size, and thus a trade-off can be made between run-time speed and memory (e.g. one could restrict the pre-computation to the most frequent words). | 1508.06615#41 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
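The scoring-time trade-off described in the preceding chunk (pre-computing the CharCNN output for all words in V, or only for the most frequent ones, to trade memory for speed) can be sketched as a simple cache. Here `char_encoder` is a hypothetical stand-in for the CharCNN-plus-highway stack, not an API from the paper.

```python
def build_cache(char_encoder, words_by_frequency, k):
    """Pre-compute representations for the k most frequent words."""
    return {w: char_encoder(w) for w in words_by_frequency[:k]}

def word_representation(word, char_encoder, cache):
    """Cheap lookup when cached; fall back to the character pipeline otherwise."""
    rep = cache.get(word)
    return rep if rep is not None else char_encoder(word)

# Toy usage with a dummy encoder (len) in place of the real CharCNN:
cache = build_cache(char_encoder=len, words_by_frequency=["the", "of", "and"], k=2)
print(word_representation("the", len, cache), word_representation("cat", len, cache))
```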
1508.06615 | 42 | # Related Work

Neural Language Models (NLM) encompass a rich family of neural network architectures for language modeling. Some example architectures include feed-forward (Bengio, Ducharme, and Vincent 2003), recurrent (Mikolov et al. 2010), sum-product (Cheng et al. 2014), log-bilinear (Mnih and Hinton 2007), and convolutional (Wang et al. 2015) networks.
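For orientation, every architecture listed above estimates the same left-to-right factorization of sequence probability; this is the standard language-modeling definition rather than anything specific to one cited model.

```latex
% Probability of a word sequence, factored word by word:
P(w_1, \dots, w_T) = \prod_{t=1}^{T} P\!\left(w_t \mid w_1, \dots, w_{t-1}\right)
```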
In order to address the rare word problem, Alexandrescu and Kirchhoff (2006), building on analogous work on count-based n-gram language models by Bilmes and Kirchhoff (2003), represent a word as a set of shared factor embeddings. Their Factored Neural Language Model (FNLM) can incorporate morphemes, word shape information (e.g. capitalization) or any other annotation (e.g. part-of-speech tags) to represent words. | 1508.06615#42 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 43 | A specific class of FNLMs leverages morphemic information by viewing a word as a function of its (learned) morpheme embeddings (Luong, Socher, and Manning 2013; Botha and Blunsom 2014; Qui et al. 2014). For example Luong, Socher, and Manning (2013) apply a recursive neural network over morpheme embeddings to obtain the embedding for a single word. While such models have proved useful, they require morphological tagging as a preprocessing step.
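As a concrete, purely illustrative picture of the morpheme-composition idea above, the sketch below builds a word vector additively from pre-segmented morphemes. The segmentation, the tiny vectors, and the additive choice are assumptions for illustration (Luong, Socher, and Manning instead run a recursive network over the morphemes).

```python
import numpy as np

def compose_word(morphemes, morpheme_vecs):
    """Word embedding as the sum of its morpheme embeddings."""
    return np.sum([morpheme_vecs[m] for m in morphemes], axis=0)

# "unhappiness" ~ un + happi + ness; the values are made up.
vecs = {
    "un":    np.array([ 0.1, -0.2]),
    "happi": np.array([ 0.8,  0.3]),
    "ness":  np.array([-0.1,  0.5]),
}
print(compose_word(["un", "happi", "ness"], vecs))  # approximately [0.8 0.6]
```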
Another direction of work has involved purely character-level NLMs, wherein both input and output are characters (Sutskever, Martens, and Hinton 2011; Graves 2013). Character-level models obviate the need for morphological tagging or manual feature engineering, and have the attractive property of being able to generate novel words. However they are generally outperformed by word-level models (Mikolov et al. 2012).
improvements have been reported on part-of-speech tagging (dos Santos and Zadrozny 2014) and named entity recognition (dos Santos | 1508.06615#43 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 44 | improvements have been reported on part-of-speech tagging (dos Santos and Zadrozny 2014) and named entity recognition (dos Santos
14 We experimented with (1) concatenation, (2) tensor products, (3) averaging, and (4) adaptive weighting schemes whereby the model learns a convex combination of word embeddings and the CharCNN outputs.
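Of the combination schemes listed in the footnote above, the adaptive one can be sketched as a learned scalar gate over the two representations. The gate parameterization below is an assumption for illustration (the footnote does not spell it out), and it requires the two vectors to share a dimension.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_combination(x_word, x_char, v, b):
    """Convex combination g * x_word + (1 - g) * x_char, with a learned
    scalar gate g computed from both representations."""
    g = sigmoid(v @ np.concatenate([x_word, x_char]) + b)
    return g * x_word + (1.0 - g) * x_char

# Toy usage with 3-dimensional representations:
rng = np.random.default_rng(1)
x_word, x_char = rng.standard_normal(3), rng.standard_normal(3)
combined = adaptive_combination(x_word, x_char, v=rng.standard_normal(6), b=0.0)
```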
and Guimaraes 2015) by representing a word as a concatenation of its word embedding and an output from a character-level CNN, and using the combined representation as features in a Conditional Random Field (CRF). Zhang, Zhao, and LeCun (2015) do away with word embeddings completely and show that for text classification, a deep CNN over characters performs well. Ballesteros, Dyer, and Smith (2015) use an RNN over characters only to train a transition-based parser, obtaining improvements on many morphologically rich languages.
Finally, Ling et al. (2015) apply a bi-directional LSTM over characters to use as inputs for language modeling and part-of-speech tagging. They show improvements on various languages (English, Portuguese, Catalan, German, Turkish). It remains open as to which character composition model (i.e. CNN or LSTM) performs better. | 1508.06615#44 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 45 | # Conclusion

We have introduced a neural language model that utilizes only character-level inputs. Predictions are still made at the word-level. Despite having fewer parameters, our model outperforms baseline models that utilize word/morpheme embeddings in the input layer. Our work questions the necessity of word embeddings (as inputs) for neural language modeling.
Analysis of word representations obtained from the character composition part of the model further indicates that the model is able to encode, from characters only, rich semantic and orthographic features. Using the CharCNN and highway layers for representation learning (e.g. as input into word2vec (Mikolov et al. 2013)) remains an avenue for future work.
Insofar as sequential processing of words as inputs is ubiquitous in natural language processing, it would be interesting to see if the architecture introduced in this paper is viable for other tasks, for example, as an encoder/decoder in neural machine translation (Cho et al. 2014; Sutskever, Vinyals, and Le 2014).
Acknowledgments We are especially grateful to Jan Botha for providing the preprocessed datasets and the model results. | 1508.06615#45 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 46 | Acknowledgments We are especially grateful to Jan Botha for providing the preprocessed datasets and the model results.
# References

Alexandrescu, A., and Kirchhoff, K. 2006. Factored Neural Language Models. In Proceedings of NAACL.
Ballesteros, M.; Dyer, C.; and Smith, N. A. 2015. Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs. In Proceedings of EMNLP.
Bengio, Y.; Ducharme, R.; and Vincent, P. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research 3:1137–1155.
Bengio, Y.; Simard, P.; and Frasconi, P. 1994. Learning Long-term Dependencies with Gradient Descent is Difficult. IEEE Transactions on Neural Networks 5:157–166.
Bilmes, J., and Kirchhoff, K. 2003. Factored Language Models and Generalized Parallel Backoff. In Proceedings of NAACL.
Botha, J., and Blunsom, P. 2014. Compositional Morphology for Word Representations and Language Modelling. In Proceedings of ICML.
Botha, J. 2014. Probabilistic Modelling of Morphologically Rich Languages. DPhil Dissertation, Oxford University. | 1508.06615#46 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 47 | Botha, J. 2014. Probabilistic Modelling of Morphologically Rich Languages. DPhil Dissertation, Oxford University.
Chen, S., and Goodman, J. 1998. An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report, Harvard University.
Cheng, W. C.; Kok, S.; Pham, H. V.; Chieu, H. L.; and Chai, K. M. 2014. Language Modeling with Sum-Product Networks. In Proceedings of INTERSPEECH.
Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of EMNLP.
Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; and Kuksa, P. 2011. Natural Language Processing (almost) from Scratch. Journal of Machine Learning Research 12:2493–2537.
Creutz, M., and Lagus, K. 2007. Unsupervised Models for Morpheme Segmentation and Morphology Learning. In Proceedings of the ACM Transactions on Speech and Language Processing. | 1508.06615#47 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 48 | Deerwester, S.; Dumais, S.; and Harshman, R. 1990. Indexing by Latent Semantic Analysis. Journal of American Society of Information Science 41:391–407.
dos Santos, C. N., and Guimaraes, V. 2015. Boosting Named Entity Recognition with Neural Character Embeddings. In Proceedings of ACL Named Entities Workshop.
dos Santos, C. N., and Zadrozny, B. 2014. Learning Character-level Representations for Part-of-Speech Tagging. In Proceedings of ICML.
Graves, A. 2013. Generating Sequences with Recurrent Neural Networks. arXiv:1308.0850.
Hinton, G.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2012. Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors. arXiv:1207.0580.
Hochreiter, S., and Schmidhuber, J. 1997. Long Short-Term Memory. Neural Computation 9:1735–1780. | 1508.06615#48 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 49 | Hochreiter, S., and Schmidhuber, J. 1997. Long Short-Term Memory. Neural Computation 9:1735–1780.
Kalchbrenner, N.; Grefenstette, E.; and Blunsom, P. 2014. A Convolutional Neural Network for Modelling Sentences. In Proceedings of ACL.
Kim, Y. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of EMNLP.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of NIPS.
LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; and Jackel, L. D. 1989. Handwritten Digit Recognition with a Backpropagation Network. In Proceedings of NIPS.
Lei, T.; Barzilay, R.; and Jaakola, T. 2015. Molding CNNs for Text: Non-linear, Non-consecutive Convolutions. In Proceedings of EMNLP. | 1508.06615#49 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 51 | Marcus, M.; Santorini, B.; and Marcinkiewicz, M. 1993. Building a Large Annotated Corpus of English: the Penn Treebank. Compu- tational Linguistics 19:331â330. Mikolov, T., and Zweig, G. 2012. Context Dependent Recurrent Neural Network Language Model. In Proceedings of SLT. Mikolov, T.; Karaï¬at, M.; Burget, L.; Cernocky, J.; and Khudanpur, S. 2010. Recurrent Neural Network Based Language Model. In Proceedings of INTERSPEECH. Mikolov, T.; Deoras, A.; Kombrink, S.; Burget, L.; and Cernocky, J. 2011. Empirical Evaluation and Combination of Advanced Lan- guage Modeling Techniques. In Proceedings of INTERSPEECH. Mikolov, T.; Sutskever, I.; Deoras, A.; Le, H.-S.; Kombrink, S.; and Cernocky, J. 2012. Subword Language Modeling with Neural Networks. preprint: www.ï¬t.vutbr.cz/Ëimikolov/rnnlm/char.pdf. Mikolov, T.; | 1508.06615#51 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 52 | preprint: www.ï¬t.vutbr.cz/Ëimikolov/rnnlm/char.pdf. Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013. Ef- ï¬cient Estimation of Word Representations in Vector Space. arXiv:1301.3781. Mnih, A., and Hinton, G. 2007. Three New Graphical Models for Statistical Language Modelling. In Proceedings of ICML. Morin, F., and Bengio, Y. 2005. Hierarchical Probabilistic Neural Network Language Model. In Proceedings of AISTATS. Pascanu, R.; Culcehre, C.; Cho, K.; and Bengio, Y. 2013. How to Construct Deep Neural Networks. arXiv:1312.6026. Qui, S.; Cui, Q.; Bian, J.; and Gao, B. 2014. Co-learning of Word Representations and Morpheme Representations. In Proceedings of COLING. Shen, Y.; He, X.; Gao, J.; Deng, L.; and Mesnil, G. 2014. A Latent Semantic Model with Convolutional-pooling Structure for Infor- mation | 1508.06615#52 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 53 | J.; Deng, L.; and Mesnil, G. 2014. A Latent Semantic Model with Convolutional-pooling Structure for Information Retrieval. In Proceedings of CIKM. Srivastava, R. K.; Greff, K.; and Schmidhuber, J. 2015. Training Very Deep Networks. arXiv:1507.06228. Sundermeyer, M.; Schluter, R.; and Ney, H. 2012. LSTM Neural Networks for Language Modeling. Sutskever, I.; Martens, J.; and Hinton, G. 2011. Generating Text with Recurrent Neural Networks. Sutskever, I.; Vinyals, O.; and Le, Q. 2014. Sequence to Sequence Learning with Neural Networks. Wang, M.; Lu, Z.; Li, H.; Jiang, W.; and Liu, Q. 2015. genCNN: A Convolutional Architecture for Word Sequence Prediction. In Proceedings of ACL. Werbos, P. 1990. Back-propagation Through Time: what it does and how to do it. In Proceedings of IEEE. Zaremba, W.; Sutskever, I.; and Vinyals, O. 2014. Recurrent Neural Network Regularization. arXiv:1409.2329. Zhang, S.; Jiang, H.; Xu, M.; Hou, J.; and Dai, L. 2015. The Fixed-Size Ordinally-Forgetting Encoding Method for Neural Network Language Models. In Proceedings of ACL. Zhang, X.; Zhao, J.; and LeCun, Y. 2015. Character-level Convolutional Networks for Text Classification. In Proceedings of NIPS. | 1508.06615#53 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.06615 | 54 | to do it. In Proceedings of IEEE. Zaremba, W.; Sutskever, I.; and Vinyals, O. 2014. Recurrent Neural Network Regularization. arXiv:1409.2329. Zhang, S.; Jiang, H.; Xu, M.; Hou, J.; and Dai, L. 2015. The Fixed-Size Ordinally-Forgetting Encoding Method for Neural Network Language Models. In Proceedings of ACL. Zhang, X.; Zhao, J.; and LeCun, Y. 2015. Character-level Convolutional Networks for Text Classification. In Proceedings of NIPS. | 1508.06615#54 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 | [
{
"id": "1507.06228"
}
] |
1508.05326 | 0 | arXiv:1508.05326v1 [cs.CL] 21 Aug 2015
# A large annotated corpus for learning natural language inference
# Samuel R. Bowman∗† [email protected]
# Gabor Angeli†‡ [email protected]
# Christopher Potts∗ [email protected]
Christopher D. Manning∗†‡ [email protected]
∗Stanford Linguistics †Stanford NLP Group ‡Stanford Computer Science
# Abstract | 1508.05326#0 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 1 | Christopher D. Manning∗†‡ [email protected]
∗Stanford Linguistics †Stanford NLP Group ‡Stanford Computer Science
# Abstract
Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
# Introduction | 1508.05326#1 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 2 | # Introduction
for approaches employing distributed word and phrase representations. Distributed representations excel at capturing relations based in similarity, and have proven effective at modeling simple dimensions of meaning like evaluative sentiment (e.g., Socher et al. 2013), but it is less clear that they can be trained to support the full range of logical and commonsense inferences required for NLI (Bowman et al., 2015; Weston et al., 2015b; Weston et al., 2015a). In a SemEval 2014 task aimed at evaluating distributed representations for NLI, the best-performing systems relied heavily on additional features and reasoning capabilities (Marelli et al., 2014a).
Our ultimate objective is to provide an empirical evaluation of learning-centered approaches to NLI, advancing the case for NLI as a tool for the evaluation of domain-general approaches to semantic representation. However, in our view, existing NLI corpora do not permit such an assessment. They are generally too small for training modern data-intensive, wide-coverage models, many contain sentences that were algorithmically generated, and they are often beset with indeterminacies of event and entity coreference that significantly impact annotation quality. | 1508.05326#2 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 3 | The semantic concepts of entailment and contradiction are central to all aspects of natural language meaning (Katz, 1972; van Benthem, 2008), from the lexicon to the content of entire texts. Thus, natural language inference (NLI), characterizing and using these relations in computational systems (Fyodorov et al., 2000; Condoravdi et al., 2003; Bos and Markert, 2005; Dagan et al., 2006; MacCartney and Manning, 2009), is essential in tasks ranging from information retrieval to semantic parsing to commonsense reasoning.
NLI has been addressed using a variety of techniques, including those based on symbolic logic, knowledge bases, and neural networks. In recent years, it has become an important testing ground | 1508.05326#3 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 4 | NLI has been addressed using a variety of techniques, including those based on symbolic logic, knowledge bases, and neural networks. In recent years, it has become an important testing ground
To address this, this paper introduces the Stanford Natural Language Inference (SNLI) corpus, a collection of sentence pairs labeled for entailment, contradiction, and semantic independence. At 570,152 sentence pairs, SNLI is two orders of magnitude larger than all other resources of its type. And, in contrast to many such resources, all of its sentences and labels were written by humans in a grounded, naturalistic context. In a separate validation phase, we collected four additional judgments for each label for 56,941 of the examples. Of these, 98% of cases emerge with a three-annotator consensus, and 58% see a unanimous consensus from all five annotators.
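The consensus figures above correspond to a simple majority rule over the author's label plus the four collected judgments. The sketch below illustrates that rule on one Table-1-style example; the field names and the exact tie-handling are assumptions for illustration, not the corpus's official format.

```python
from collections import Counter

def gold_label(labels, threshold=3):
    """Return the majority label if at least `threshold` of the five
    annotations agree; otherwise report no consensus (None)."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= threshold else None

example = {
    "premise": "A soccer game with multiple males playing.",
    "hypothesis": "Some men are playing a sport.",
    "annotator_labels": ["E", "E", "E", "E", "E"],  # author's label listed first
}
print(gold_label(example["annotator_labels"]))  # E
```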
In this paper, we use this corpus to evaluate | 1508.05326#4 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 5 | In this paper, we use this corpus to evaluate
A man inspects the uniform of a figure in some East Asian country. [contradiction: C C C C C] The man is sleeping
An older and younger man smiling. [neutral: N N E N N] Two men are smiling and laughing at the cats playing on the floor.
A black race car starts up in front of a crowd of people. [contradiction: C C C C C] A man is driving down a lonely road.
A soccer game with multiple males playing. [entailment: E E E E E] Some men are playing a sport.
A smiling costumed woman is holding an umbrella. [neutral: N N E C N] A happy woman in a fairy costume holds an umbrella.
Table 1: Randomly chosen examples from the development section of our new corpus, shown with both the selected gold labels and the full set of labels (abbreviated) from the individual annotators, including (in the first position) the label used by the initial author of the pair. | 1508.05326#5 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 6 | a variety of models for natural language inference, including rule-based systems, simple linear classifiers, and neural network-based models. We find that two models achieve comparable performance: a feature-rich classifier model and a neural network model centered around a Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber 1997). We further evaluate the LSTM model by taking advantage of its ready support for transfer learning, and show that it can be adapted to an existing NLI challenge task, yielding the best reported performance by a neural network model and approaching the overall state of the art.
# 2 A new corpus for NLI | 1508.05326#6 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 7 | To date, the primary sources of annotated NLI cor- pora have been the Recognizing Textual Entail- ment (RTE) challenge tasks.1 These are generally high-quality, hand-labeled data sets, and they have stimulated innovative logical and statistical mod- els of natural language reasoning, but their small size (fewer than a thousand examples each) limits their utility as a testbed for learned distributed rep- resentations. The data for the SemEval 2014 task called Sentences Involving Compositional Knowl- edge (SICK) is a step up in terms of size, but only to 4,500 training examples, and its partly automatic construction introduced some spurious patterns into the data (Marelli et al. 2014a, §6). The Denotation Graph entailment set (Young et al., 2014) contains millions of examples of en- tailments between sentences and artiï¬cially con- structed short phrases, but it was labeled using fully automatic methods, and is noisy enough that it is probably suitable only as a source of supplementary training data. Outside the domain of sentence-level entailment, Levy et al. (2014) intro- duce a large corpus of semi-automatically | 1508.05326#7 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 8 | training data. Outside the domain of sentence-level entailment, Levy et al. (2014) introduce a large corpus of semi-automatically annotated entailment examples between subject–verb–object relation triples, and the second release of the Paraphrase Database (Pavlick et al., 2015) includes automatically generated entailment annotations over a large corpus of pairs of words and short phrases. | 1508.05326#8 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 9 | Existing resources suffer from a subtler issue that impacts even projects using only human-provided annotations: indeterminacies of event and entity coreference lead to insurmountable indeterminacy concerning the correct semantic label (de Marneffe et al. 2008 §4.3; Marelli et al. 2014b). For an example of the pitfalls surrounding entity coreference, consider the sentence pair A boat sank in the Pacific Ocean and A boat sank in the Atlantic Ocean. The pair could be labeled as a contradiction if one assumes that the two sentences refer to the same single event, but could also be reasonably labeled as neutral if that assumption is not made. In order to ensure that our labeling scheme assigns a single correct label to every pair, we must select one of these approaches across the board, but both choices present problems. If we opt not to assume that events are coreferent, then we will only ever find contradictions between sentences that make broad universal assertions, but if we opt to assume coreference, new counterintuitive predictions emerge. For example, Ruth Bader Ginsburg was appointed to the US Supreme Court and I had a sandwich | 1508.05326#9 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 10 | 1 http://aclweb.org/aclwiki/index.php?title=Textual_Entailment_Resource_Pool
York and A tourist visited the city. Assuming coreference between New York and the city justifies labeling the pair as an entailment, but without that assumption the city could be taken to refer to a specific unknown city, leaving the pair neutral. This kind of indeterminacy of label can be resolved only once the questions of coreference are resolved.
With SNLI, we sought to address the issues of size, quality, and indeterminacy. To do this, we employed a crowdsourcing framework with the following crucial innovations. First, the examples were grounded in specific scenarios, and the premise and hypothesis sentences in each example were constrained to describe that scenario from the same perspective, which helps greatly in controlling event and entity coreference.2 Second, the prompt gave participants the freedom to produce entirely novel sentences within the task setting, which led to richer examples than we see with the more proscribed string-editing techniques of earlier approaches, without sacrificing consistency. Third, a subset of the resulting sentences were sent to a validation task aimed at providing a highly reliable set of annotations over the same data, and at identifying areas of inferential uncertainty. | 1508.05326#10 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 12 | # 2.1 Data collection
We used Amazon Mechanical Turk for data collection. In each individual task (each HIT), a worker was presented with premise scene descriptions from a pre-existing corpus, and asked to supply hypotheses for each of our three labels (entailment, neutral, and contradiction), forcing the data to be balanced among these classes.
The instructions that we provided to the workers are shown in Figure 1. Below the instructions were three fields for each of three requested sentences, corresponding to our entailment, neutral, and contradiction labels, a fourth field (marked optional) for reporting problems, and a link to an FAQ page. That FAQ grew over the course of data collection. It warned about disallowed techniques (e.g., reusing the same sentence for many different prompts, which we saw in a few cases), provided guidance concerning sentence length and
2 Issues of coreference are not completely solved, but greatly mitigated. For example, with the premise sentence A dog is lying in the grass, a worker could safely assume that the dog is the most prominent thing in the photo, and very likely the only dog, and build contradicting sentences assuming reference to the same dog.
We will show you the caption for a photo. We will not show you the photo. Using only the caption and what you know about the world: | 1508.05326#12 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 13 | We will show you the caption for a photo. We will not show you the photo. Using only the caption and what you know about the world:
⢠Write one alternate caption that is deï¬nitely a true description of the photo. Example: For the caption âTwo dogs are running through a ï¬eld.â you could write âThere are animals outdoors.â
⢠Write one alternate caption that might be a true description of the photo. Example: For the cap- tion âTwo dogs are running through a ï¬eld.â you could write âSome puppies are running to catch a stick.â
⢠Write one alternate caption that is deï¬nitely a false description of the photo. Example: For the caption âTwo dogs are running through a ï¬eld.â you could write âThe pets are sitting on a couch.â This is different from the maybe correct category because itâs impossible for the dogs to be both running and sitting.
Figure 1: The instructions used on Mechanical Turk for data collection.
complexity (we did not enforce a minimum length, and we allowed bare NPs as well as full sentences), and reviewed logistical issues around payment timing. About 2,500 workers contributed. | 1508.05326#13 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 14 | For the premises, we used captions from the Flickr30k corpus (Young et al., 2014), a collection of approximately 160k captions (corresponding to about 30k images) collected in an earlier crowdsourced effort.3 The captions were not authored by the photographers who took the source images, and they tend to contain relatively literal scene descriptions that are suited to our approach, rather than those typically associated with personal photographs (as in their example: Our trip to the Olympic Peninsula). In order to ensure that the label for each sentence pair can be recovered solely based on the available text, we did not use the images at all during corpus collection.
Table 2 reports some key statistics about the collected corpus, and Figure 2 shows the distributions of sentence lengths for both our source hypotheses and our newly collected premises. We observed that while premise sentences varied considerably in length, hypothesis sentences tended to be as
3 We additionally include about 4k sentence pairs from a pilot study in which the premise sentences were instead drawn from the VisualGenome corpus (under construction; visualgenome.org). These examples appear only in the training set, and have pair identifiers prefixed with vg in our corpus. | 1508.05326#14 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 15 | Data set sizes:
  Training pairs: 550,152
  Development pairs: 10,000
  Test pairs: 10,000
Sentence length:
  Premise mean token count: 14.1
  Hypothesis mean token count: 8.3
Parser output:
  Premise 'S'-rooted parses: 74.0%
  Hypothesis 'S'-rooted parses: 88.9%
  Distinct words (ignoring case): 37,026
Table 2: Key statistics for the raw sentence pairs in SNLI. Since the two halves of each pair were collected separately, we report some statistics for both.
short as possible while still providing enough information to yield a clear judgment, clustering at around seven words. We also observed that the bulk of the sentences from both sources were syntactically complete rather than fragments, and the frequency with which the parser produces a parse rooted with an 'S' (sentence) node attests to this. (A sketch of computing such corpus statistics follows this record.)
# 2.2 Data validation | 1508.05326#15 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
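A minimal sketch of how statistics like those in Table 2 above can be computed from raw sentence pairs. Whitespace tokenization and the premises/hypotheses argument names are illustrative assumptions, not the authors' tooling.

```python
def corpus_stats(premises, hypotheses):
    # Whitespace tokenization stands in for the tokenizer used in the paper.
    p_tokens = [s.split() for s in premises]
    h_tokens = [s.split() for s in hypotheses]
    vocab = {w.lower() for toks in p_tokens + h_tokens for w in toks}
    return {
        "premise_mean_token_count": sum(map(len, p_tokens)) / len(p_tokens),
        "hypothesis_mean_token_count": sum(map(len, h_tokens)) / len(h_tokens),
        "distinct_words_ignoring_case": len(vocab),
    }

print(corpus_stats(["Two dogs are running through a field ."],
                   ["There are animals outdoors ."]))
```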
1508.05326 | 16 | # 2.2 Data validation
In order to measure the quality of our corpus, and in order to construct maximally useful testing and development sets, we performed an additional round of validation for about 10% of our data. This validation phase followed the same basic form as the Mechanical Turk labeling task used to label the SICK entailment data: we presented workers with pairs of sentences in batches of five, and asked them to choose a single label for each pair. We supplied each pair to four annotators, yielding five labels per pair including the label used by the original author. The instructions were similar to the instructions for initial data collection shown in Figure 1, and linked to a similar FAQ. Though we initially used a very restrictive qualification (based on past approval rate) to select workers for the validation task, we nonetheless discovered (and deleted) some instances of random guessing in an early batch of work, and subsequently instituted a fully closed qualification restricted to about 30 trusted workers.
For each pair that we validated, we assigned a gold label. If any one of the three labels was chosen by at least three of the five annotators, it was | 1508.05326#16 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 17 | For each pair that we validated, we assigned a gold label. If any one of the three labels was chosen by at least three of the five annotators, it was
[Figure 2 plot: sentence length distributions for premise vs. hypothesis sentences; x-axis: sentence length (tokens), 0-40; y-axis: number of sentences.]
Figure 2: The distribution of sentence length.
chosen as the gold label. If there was no such consensus, which occurred in about 2% of cases, we assigned the placeholder label "-". While these unlabeled examples are included in the corpus distribution, they are unlikely to be helpful for the standard NLI classification task, and we do not include them in either training or evaluation in the experiments that we discuss in this paper. (A sketch of this gold-labeling rule follows this record.) | 1508.05326#17 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
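The gold-labeling rule described in the preceding record (a label wins with at least three of the five votes, otherwise the pair receives the placeholder "-") can be sketched in a few lines; the function name and label strings here are illustrative.

```python
from collections import Counter

def gold_label(votes):
    # votes: five labels, the author's label plus four validator labels.
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= 3 else "-"

print(gold_label(["entailment", "entailment", "entailment", "neutral", "contradiction"]))  # entailment
print(gold_label(["entailment", "entailment", "neutral", "neutral", "contradiction"]))     # -
```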
1508.05326 | 18 | The results of this validation process are summarized in Table 3. Nearly all of the examples received a majority label, indicating broad consensus about the nature of the data and categories. The gold-labeled examples are very nearly evenly distributed across the three labels. The Fleiss κ scores (computed over every example with a full five annotations) are likely to be conservative given our large and unevenly distributed pool of annotators, but they still provide insights about the levels of disagreement across the three semantic classes. This disagreement likely reflects not just the limitations of large crowdsourcing efforts but also the uncertainty inherent in naturalistic NLI. Regardless, the overall rate of agreement is extremely high, suggesting that the corpus is sufficiently high quality to pose a challenging but realistic machine learning task. (A sketch of the Fleiss κ computation follows this record.)
# 2.3 The distributed corpus
Table 1 shows a set of randomly chosen validated examples from the development set with their labels. Qualitatively, we find the data that we collected draws fairly extensively on commonsense knowledge, and that hypothesis and premise sentences often differ structurally in significant ways, suggesting that there is room for improvement beyond superficial word alignment models. We also find the sentences that we collected to be largely | 1508.05326#18 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
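For reference, a generic Python sketch of the Fleiss κ statistic reported above, computed from per-pair vote counts over the three labels. This is a textbook implementation under assumed input shapes, not the authors' evaluation script.

```python
def fleiss_kappa(vote_counts):
    # vote_counts: one row per validated pair, e.g. [3, 1, 1] = 3 entailment,
    # 1 neutral, 1 contradiction votes; every row must have the same total.
    N = len(vote_counts)
    n = sum(vote_counts[0])                    # ratings per pair (five here)
    k = len(vote_counts[0])                    # number of categories
    p_j = [sum(row[j] for row in vote_counts) / (N * n) for j in range(k)]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in vote_counts]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

print(fleiss_kappa([[5, 0, 0], [3, 2, 0], [0, 4, 1], [1, 1, 3]]))
```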
1508.05326 | 19 | General:
  Validated pairs: 56,951
  Pairs w/ unanimous gold label: 58.3%
Individual annotator label agreement:
  Individual label = gold label: 89.0%
  Individual label = author's label: 85.8%
Gold label/author's label agreement:
  Gold label = author's label: 91.2%
  Gold label ≠ author's label: 6.8%
  No gold label (no 3 labels match): 2.0%
Fleiss κ:
  contradiction: 0.77
  entailment: 0.72
  neutral: 0.60
  Overall: 0.70
Table 3: Statistics for the validated pairs. The author's label is the label used by the worker who wrote the premise to create the sentence pair. A gold label reflects a consensus of three votes from among the author and the four annotators.
fluent, correctly spelled English, with a mix of full sentences and caption-style noun phrase fragments, though punctuation and capitalization are often omitted.
The corpus is available under a CreativeCommons Attribution-ShareAlike license, the same license used for the Flickr30k source captions. It can be downloaded at: nlp.stanford.edu/projects/snli/ | 1508.05326#19 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 20 | Partition We distribute the corpus with a pre-specified train/test/development split. The test and development sets contain 10k examples each. Each original ImageFlickr caption occurs in only one of the three sets, and all of the examples in the test and development sets have been validated. (A sketch of such a caption-grouped split follows this record.)
Parses The distributed corpus includes parses produced by the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003), trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), which we found to improve the parse quality of the descriptive sentences and noun phrases found in the descriptions.
# 3 Our data as a platform for evaluation
The most immediate application for our corpus is in developing models for the task of NLI. In par-

System                SNLI  SICK  RTE-3
Edit Distance Based   71.9  65.4  61.9
Classifier Based      72.2  71.4  61.5
+ Lexical Resources   75.0  78.8  63.6
Table 4: 2-class test accuracy for two simple baseline systems included in the Excitement Open Platform, as well as SICK and RTE results for a model making use of more sophisticated lexical resources. | 1508.05326#20 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
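The partition constraint described in the preceding record (each original caption lands in exactly one of train/dev/test) amounts to splitting by caption rather than by pair. A minimal sketch under assumed field names and the 10k/10k dev/test sizes; the real split need not have been produced this way.

```python
import random

def split_by_caption(pairs, dev_size=10000, test_size=10000, seed=0):
    # Group pairs by their source caption so no caption crosses splits.
    groups = {}
    for pair in pairs:
        groups.setdefault(pair["caption_id"], []).append(pair)
    caption_ids = sorted(groups)
    random.Random(seed).shuffle(caption_ids)
    dev, test, train = [], [], []
    for cid in caption_ids:
        if len(dev) < dev_size:
            dev.extend(groups[cid])
        elif len(test) < test_size:
            test.extend(groups[cid])
        else:
            train.extend(groups[cid])
    return train, dev, test

toy = [{"caption_id": i // 3, "pair_id": i} for i in range(30)]
train, dev, test = split_by_caption(toy, dev_size=6, test_size=6)
print(len(train), len(dev), len(test))   # 18 6 6
```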
1508.05326 | 21 | ticular, since it is dramatically larger than any existing corpus of comparable quality, we expect it to be suitable for training parameter-rich models like neural networks, which have not previously been competitive at this task. Our ability to evaluate standard classifier-based NLI models, however, was limited to those which were designed to scale to SNLI's size without modification, so a more complete comparison of approaches will have to wait for future work. In this section, we explore the performance of three classes of models which could scale readily: (i) models from a well-known NLI system, the Excitement Open Platform; (ii) variants of a strong but simple feature-based classifier model, which makes use of both unlexicalized and lexicalized features, and (iii) distributed representation models, including a baseline model and neural network sequence models.
# 3.1 Excitement Open Platform models | 1508.05326#21 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 22 | # 3.1 Excitement Open Platform models
The first class of models is from the Excitement Open Platform (EOP, Padó et al. 2014; Magnini et al. 2014), an open source platform for RTE research. EOP is a tool for quickly developing NLI systems while sharing components such as common lexical resources and evaluation sets. We evaluate on two algorithms included in the distribution: a simple edit-distance based algorithm and a classifier-based algorithm, the latter both in a bare form and augmented with EOP's full suite of lexical resources.
Our initial goal was to better understand the difficulty of the task of classifying SNLI corpus inferences, rather than necessarily the performance of a state-of-the-art RTE system. We approached this by running the same system on several data sets: our own test set, the SICK test data, and the standard RTE-3 test set (Giampiccolo et al., 2007). We report results in Table 4. Each of the models | 1508.05326#22 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 23 | was separately trained on the training set of each corpus. All models are evaluated only on 2-class entailment. To convert 3-class problems like SICK and SNLI to this setting, all instances of contradiction and unknown are converted to nonentailment. This yields a most-frequent-class baseline accuracy of 66% on SNLI, and 71% on SICK. This is intended primarily to demonstrate the difficulty of the task, rather than necessarily the performance of a state-of-the-art RTE system. The edit distance algorithm tunes the weight of the three case-insensitive edit distance operations on the training set, after removing stop words. In addition to the base classifier-based system distributed with the platform, we train a variant which includes information from WordNet (Miller, 1995) and VerbOcean (Chklovski and Pantel, 2004), and makes use of features based on tree patterns and dependency tree skeletons (Wang and Neumann, 2007). (A sketch of the 2-class conversion and baseline follows this record.)
# 3.2 Lexicalized Classifier | 1508.05326#23 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
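A small sketch of the 2-class conversion and most-frequent-class baseline described in the preceding record; the label strings are illustrative.

```python
def to_two_class(label):
    # Collapse 3-class NLI labels: contradiction and neutral/unknown both
    # become nonentailment for the 2-class evaluation.
    return "entailment" if label == "entailment" else "nonentailment"

def most_frequent_class_accuracy(gold_labels):
    two_class = [to_two_class(l) for l in gold_labels]
    top = max(set(two_class), key=two_class.count)
    return sum(1 for l in two_class if l == top) / len(two_class)

labels = ["entailment", "neutral", "contradiction", "neutral", "entailment", "contradiction"]
print(most_frequent_class_accuracy(labels))   # 4/6, i.e. about 0.67
```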
1508.05326 | 24 | # 3.2 Lexicalized Classifier
Unlike the RTE datasets, SNLI's size supports approaches which make use of rich lexicalized features. We evaluate a simple lexicalized classifier to explore the ability of non-specialized models to exploit these features in lieu of more involved language understanding. Our classifier implements 6 feature types; 3 unlexicalized and 3 lexicalized:
1. The BLEU score of the hypothesis with respect to the premise, using an n-gram length between 1 and 4.
2. The length difference between the hypothesis and the premise, as a real-valued feature.
3. The overlap between words in the premise and hypothesis, both as an absolute count and a percentage of possible overlap, and both over all words and over just nouns, verbs, adjectives, and adverbs.
4. An indicator for every unigram and bigram in the hypothesis.
5. Cross-unigrams: for every pair of words across the premise and hypothesis which share a POS tag, an indicator feature over the two words.
6. Cross-bigrams: for every pair of bigrams across the premise and hypothesis which share a POS tag on the second word, an indicator feature over the two bigrams. (A sketch of these feature templates follows this record.) | 1508.05326#24 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
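A sketch of the six feature templates listed in the preceding record. The smoothed BLEU implementation, the simplified overlap count, and the assumption that tokens and POS tags arrive from an external tagger are all illustrative simplifications, not the authors' feature extractor.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, premise, max_n=4):
    # Smoothed, clipped n-gram precision (n = 1..4) with a brevity penalty.
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp, ref = Counter(ngrams(hypothesis, n)), Counter(ngrams(premise, n))
        overlap = sum(min(c, ref[g]) for g, c in hyp.items())
        total = max(sum(hyp.values()), 1)
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    brevity = min(1.0, math.exp(1 - len(premise) / max(len(hypothesis), 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)

def extract_features(premise, hypothesis, premise_pos, hypothesis_pos):
    feats = {
        "bleu": bleu(hypothesis, premise),                           # feature 1
        "length_diff": len(hypothesis) - len(premise),               # feature 2
        "overlap": sum(1 for w in hypothesis if w in set(premise)),  # feature 3 (count only)
    }
    for i, w in enumerate(hypothesis):                               # feature 4
        feats["hyp_uni_" + w] = 1.0
        if i + 1 < len(hypothesis):
            feats["hyp_bi_" + w + "_" + hypothesis[i + 1]] = 1.0
    for wp, tp in zip(premise, premise_pos):                         # feature 5: cross-unigrams
        for wh, th in zip(hypothesis, hypothesis_pos):
            if tp == th:
                feats["x_uni_" + wp + "_" + wh] = 1.0
    for (p1, p2), (t1, t2) in zip(ngrams(premise, 2), ngrams(premise_pos, 2)):   # feature 6
        for (h1, h2), (u1, u2) in zip(ngrams(hypothesis, 2), ngrams(hypothesis_pos, 2)):
            if t2 == u2:
                feats["x_bi_" + p1 + "_" + p2 + "__" + h1 + "_" + h2] = 1.0
    return feats

print(extract_features("two dogs run outside".split(), "animals are outdoors".split(),
                       ["CD", "NNS", "VBP", "RB"], ["NNS", "VBP", "RB"]))
```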
1508.05326 | 25 | for every pair of bigrams across the premise and hypothesis which share a POS tag on the second word, an indicator feature over the two bigrams.
We report results in Table 5, along with ablation studies for removing the cross-bigram features (leaving only the cross-unigram feature) and
System          SNLI Train  SNLI Test  SICK Train  SICK Test
Lexicalized           99.7       78.2        90.4       77.8
Unigrams Only         93.1       71.6        88.1       77.0
Unlexicalized         49.4       50.4        69.9       69.6
Table 5: 3-class accuracy, training on either our data or SICK, including models lacking cross-bigram features (Feature 6), and lacking all lexical features (Features 4-6). We report results both on the test set and the training set to judge overfitting. | 1508.05326#25 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 26 | for removing all lexicalized features. On our large corpus in particular, there is a substantial jump in accuracy from using lexicalized features, and another from using the very sparse cross-bigram features. The latter result suggests that there is value in letting the classifier automatically learn to recognize structures like explicit negations and adjective modification. A similar result was shown in Wang and Manning (2012) for bigram features in sentiment analysis.
It is surprising that the classifier performs as well as it does without any notion of alignment or tree transformations. Although we expect that richer models would perform better, the results suggest that given enough data, cross bigrams with the noisy part-of-speech overlap constraint can produce an effective model.
# 3.3 Sentence embeddings and NLI | 1508.05326#26 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 27 | # 3.3 Sentence embeddings and NLI
SNLI is suitably large and diverse to make it possible to train neural network models that produce distributed representations of sentence meaning. In this section, we compare the performance of three such models on the corpus. To focus specifically on the strengths of these models at producing informative sentence representations, we use sentence embedding as an intermediate step in the NLI classification task: each model must produce a vector representation of each of the two sentences without using any context from the other sentence, and the two resulting vectors are then passed to a neural network classifier which predicts the label for the pair. This choice allows us to focus on existing models for sentence embedding, and it allows us to evaluate the ability of those models to learn useful representations of meaning (which may be independently useful for subsequent tasks), at the cost of excluding from con-
[Figure 3 diagram: a 3-way softmax classifier fed by three 200d tanh layers, taking the 100d premise and 100d hypothesis vectors from the two sentence models as input.] | 1508.05326#27 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 29 | Our neural network classifier, depicted in Figure 3 (and based on a one-layer model in Bowman et al. 2015), is simply a stack of three 200d tanh layers, with the bottom layer taking the concatenated sentence representations as input and the top layer feeding a softmax classifier, all trained jointly with the sentence embedding model itself (see the sketch after this record). We test three sentence embedding models, each set to use 100d phrase and sentence embeddings. Our baseline sentence embedding model simply sums the embeddings of the words in each sentence. In addition, we experiment with two simple sequence embedding models: a plain RNN and an LSTM RNN (Hochreiter and Schmidhuber, 1997). The word embeddings for all of the models are initialized with the 300d reference GloVe vectors (840B token version, Pennington et al. 2014) and fine-tuned as part of training. In addition, all of the models use an additional tanh neural network layer to map these 300d embeddings into the lower-dimensional phrase and sentence embedding space. All of the models are randomly initialized using standard techniques and trained using AdaDelta (Zeiler, | 1508.05326#29 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
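A minimal numpy sketch of the classifier described in the preceding record: the sum-of-words baseline sentence model feeding a stack of three 200d tanh layers and a 3-way softmax. Dimensions follow the text; the random toy embeddings and initialization are illustrative assumptions, and no training loop (AdaDelta, GloVe fine-tuning) is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 100d word embeddings; the paper instead maps fine-tuned 300d GloVe
# vectors down to the lower-dimensional space with an extra tanh layer.
vocab = {w: rng.normal(size=100) for w in "a dog is running outside the animal moves".split()}

def embed(sentence):
    # Baseline sentence model: sum of word embeddings.
    return sum(vocab[w] for w in sentence.split())

def classify(premise_vec, hypothesis_vec, params):
    # Three 200d tanh layers over the concatenated 100d sentence vectors,
    # then a 3-way softmax over entailment / neutral / contradiction.
    h = np.concatenate([premise_vec, hypothesis_vec])
    for W, b in params["hidden"]:
        h = np.tanh(W @ h + b)
    W_out, b_out = params["out"]
    logits = W_out @ h + b_out
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

params = {
    "hidden": [(rng.normal(scale=0.1, size=(200, 200)), np.zeros(200)) for _ in range(3)],
    "out": (rng.normal(scale=0.1, size=(3, 200)), np.zeros(3)),
}
print(classify(embed("a dog is running outside"), embed("the animal moves"), params))
```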