doi (string, len 10–10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31–31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable ⌀) | journal_ref (string, len 8–194, nullable ⌀) | primary_category (string, len 5–17) | published (string, len 8–8) | updated (string, len 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1606.05250 | 7 | Reading comprehension. A data-driven approach to reading comprehension goes back to Hirschman et al. (1999), who curated a dataset of 600 real 3rd–6th grade reading comprehension questions. Their pattern-matching baseline was subsequently improved by a rule-based system (Riloff and Thelen, 2000) and a logistic regression model (Ng et al., 2000). More recently, Richardson et al. (2013) curated MCTest, which contains 660 stories created by crowdworkers, with 4 questions per story and 4 answer choices per question. Because many of the questions require commonsense reasoning and reasoning across multiple sentences, the dataset remains quite challenging, though there has been noticeable progress (Narasimhan and Barzilay, 2015; Sachan et al., 2015; Wang et al., 2015). Both curated datasets, although real and difficult, are too small to support very expressive statistical models. | 1606.05250#7 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05378 | 7 | First, we will define the context-dependent semantic parsing task. Define w0 as the initial world state, which consists of a set of entities (beakers in ALCHEMY) and properties (location, color(s), and amount filled). The text x is a sequence of utterances x1, . . . , xL. For each utterance xi (e.g., "mix"), we have a latent logical form zi (e.g., mix(args[1][2])). Define the context ci = (w0, z1:i−1) to include the initial world state w0 and the history of past logical forms z1:i−1. Each logical form zi is executed on the context ci to produce the next state: wi = Exec(ci, zi) for each i = 1, . . . , L. Overloading notation, we write wL = Exec(w0, z), where z = (z1, . . . , zL). The learning problem is: given a set of training examples {(w0, x, wL)}, learn a mapping from the text x to logical forms z = (z1, . . . , zL) that produces the correct final | 1606.05378#7 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
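The execution semantics in the chunk above (wi = Exec(ci, zi), chained into wL = Exec(w0, z)) can be made concrete with a short sketch. This is a minimal illustration under assumed representations: the Beaker dataclass and the single toy mix action are hypothetical, not the authors' code.

```python
# A minimal sketch of the execution chaining above; the Beaker representation
# and the single "mix" action are assumptions for illustration only.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Beaker:
    pos: int
    colors: tuple  # e.g., ("green", "red") when two chemicals are stacked

def exec_mix(world, beaker_pos):
    """Collapse a stacked beaker into a single 'brown' mixture (toy action)."""
    return tuple(
        replace(b, colors=("brown",)) if b.pos == beaker_pos and len(b.colors) > 1 else b
        for b in world
    )

def exec_logical_form(context, z):
    """w_i = Exec(c_i, z_i): apply one logical form to the latest world state."""
    w_prev = context[-1]
    action, arg = z          # e.g., ("mix", 1)
    assert action == "mix"   # only one toy action in this sketch
    return exec_mix(w_prev, arg)

def exec_sequence(w0, zs):
    """w_L = Exec(w_0, z): chain execution over z = (z_1, ..., z_L)."""
    worlds = [w0]
    for z in zs:
        worlds.append(exec_logical_form(worlds, z))
    return worlds[-1]

w0 = (Beaker(1, ("green", "red")), Beaker(2, ("yellow",)))
print(exec_sequence(w0, [("mix", 1)]))  # beaker 1 becomes ("brown",)
```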
1606.05250 | 8 | Some datasets focus on deeper reasoning abilities. Algebra word problems require understanding a story well enough to turn it into a system of equations, which can be easily solved to produce the answer (Kushman et al., 2014; Hosseini et al., 2014). bAbI (Weston et al., 2015), a fully synthetic RC dataset, is stratified by the different types of reasoning required to solve each task. Clark and Etzioni (2016) describe the task of solving 4th grade science exams, and stress the need to reason with world knowledge.
Open-domain question answering. The goal of open-domain QA is to answer a question from a large collection of documents. The annual evaluations at the Text REtrieval Conference (TREC) (Voorhees and Tice, 2000) led to many advances in open-domain QA, many of which were used in IBM Watson for Jeopardy! (Ferrucci et al., 2013). Recently, Yang et al. (2015) created the WikiQA dataset, which, like SQuAD, uses Wikipedia passages as a source of answers, but their task is sentence selection, while ours requires selecting a specific span in the sentence. | 1606.05250#8 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05250 | 9 | Selecting the span of text that answers a question is similar to answer extraction, the final step in the open-domain QA pipeline, methods for which include bootstrapping surface patterns (Ravichandran and Hovy, 2002), using dependency trees (Shen and Klakow, 2006), and using a factor graph over multiple sentences (Sun et al., 2013). One key difference between our RC setting and answer extraction is that answer extraction typically exploits the fact that the answer occurs in multiple documents (Brill et al., 2002), which is more lenient than in our setting, where a system only has access to a single reading passage. | 1606.05250#9 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05378 | 9 | Dataset statistics: SCENE: 4402 examples (3363 train, 1039 test), 56.2 words/example, utterances such as "then one more", "he moves back". ALCHEMY: 4560 examples (3661 train, 899 test), 39.9 words/example, "mix", "throw the rest out". TANGRAMS: 4989 examples (4189 train, 800 test), 27.2 words/example, "undo", "replace it", "take it away".
Table 1: We collected three datasets. The number of examples, train/test split, number of tokens per example, along with interesting phenomena are shown for each dataset.
# 2.2 Datasets
We created three new context-dependent datasets, ALCHEMY, SCENE, and TANGRAMS (see Table 1 for a summary), which aim to capture a diverse set of context-dependent linguistic phenomena such as ellipsis (e.g., "mix" in ALCHEMY), anaphora on entities (e.g., "he" in SCENE), and anaphora on actions (e.g., "repeat step 3", "bring it back" in TANGRAMS). | 1606.05378#9 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05250 | 10 | Cloze datasets. Recently, researchers have constructed cloze datasets, in which the goal is to predict the missing word (often a named entity) in a passage. Since these datasets can be automatically generated from naturally occurring data, they can be extremely large. The Children's Book Test (CBT) (Hill et al., 2015), for example, involves predicting a blanked-out word of a sentence given the 20 previous sentences. Hermann et al. (2015) constructed a corpus of cloze-style questions by blanking out entities in abstractive summaries of CNN / Daily News articles; the goal is to fill in the entity based on the original article. While the size of this dataset is impressive, Chen et al. (2016) showed that the dataset requires less reasoning than previously thought, and | 1606.05250#10 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05378 | 10 | For each dataset, we have a set of properties and actions. In ALCHEMY, properties are color and amount; actions are pour, drain, and mix. In SCENE, properties are hat-color and shirt-color; actions are enter, leave, move, and trade-hats. In TANGRAMS, there is one property (shape), and actions are add, remove, and swap. In addition, we include the position property (pos) in each dataset. Each example has L = 5 utterances, each denoting some transformation of the world state.
tions and arguments (e.g., the next action is more likely to be drain(beaker2) rather than drain(beaker5)). Next, we presented an AMT worker with states w0, . . . , wL and asked the worker to write a description in between each pair of successive states. | 1606.05378#10 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
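The data-generation loop described in the 1606.05378 chunks above (random initial state, L sampled transitions described by AMT workers, with recently used actions upweighted) might look roughly like the following. The specific boost factor and action set are assumptions for illustration, not the authors' implementation.

```python
# A rough sketch of the sampling loop: sample L valid actions, favoring
# recently used ones to encourage context-dependent descriptions. The
# weighting scheme (boost=3.0) is a hypothetical choice.
import random

ACTIONS = ["pour", "drain", "mix"]

def sample_transition(history, boost=3.0):
    """Sample the next action, upweighting those used in the last two steps."""
    recent = set(history[-2:])
    weights = [boost if a in recent else 1.0 for a in ACTIONS]
    return random.choices(ACTIONS, weights=weights, k=1)[0]

def generate_example(L=5):
    history = []
    for _ in range(L):
        history.append(sample_transition(history))
    return history  # an AMT worker would then describe each transition

random.seed(0)
print(generate_example())
```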
1606.05250 | 11 | Paragraph 1 of 43 "Spend around 4 minutes on the following paragraph to ask 5 questions! If you can't ask 5 questions, ask 4 or 3 (worse), but do your best to ask 5. Select the answer from the paragraph by clicking on 'Select Answer', and then highlight the smallest segment of the paragraph that answers the question." Oxygen is a chemical element with symbol O and atomic number 8. It is a member of the chalcogen group on the periodic table and is a highly reactive nonmetal and oxidizing agent that readily forms compounds (notably oxides) with most elements. By mass, oxygen is the third-most abundant element in the universe, after hydrogen and helium. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless and odorless diatomic gas with the formula O2. Diatomic oxygen gas constitutes 20.8% of the Earth's atmosphere. However, monitoring of atmospheric oxygen levels shows a global downward trend because of fossil-fuel burning. Oxygen is the most abundant element by mass in the Earth's crust as part of oxide compounds such as silicon dioxide, making up almost half of the crust's mass. When asking questions, avoid using the same words/phrases as in the paragraph. Also, you are encouraged to pose hard questions. Ask a question here. Try using your own words | 1606.05250#11 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05378 | 11 | In initial experiments, we found it rather non-trivial to obtain interesting linguistic context-dependence in these micro-domains: often a context-independent utterance such as "beaker 2" is just clearer and not much longer than a possibly ambiguous "it". We modified the domains to encourage more context. For example, in SCENE, we removed any visual indication of absolute position and allowed people to only move next to other people. This way, workers would say "to the left of the man in the red hat" rather than "to position 2".
# 3 Model
they are grounded to a world state and have rich linguistic context-dependence. In the context-dependent ATIS dataset (Dahl et al., 1994) used by Zettlemoyer and Collins (2009), logical forms of utterances depend on previous logical forms, though there is no world state and the linguistic phenomena are limited to nominal references. In the map navigation dataset (Chen and Mooney, 2011), used by Artzi and Zettlemoyer (2013), utterances only reference the current world state. Vlachos and Clark (2014) released a corpus of annotated dialogues, which has interesting linguistic context-dependence, but there is no world state. | 1606.05378#11 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05250 | 12 | Figure 2: The crowd-facing web interface used to collect the dataset encourages crowdworkers to use their own words while asking questions.
concluded that performance is almost saturated.
One difference between SQuAD questions and cloze-style queries is that answers to cloze queries are single words or entities, while answers in SQuAD often include non-entities and can be much longer phrases. Another difference is that SQuAD focuses on questions whose answers are entailed by the passage, whereas the answers to cloze-style queries are merely suggested by the passage.
# 3 Dataset Collection
We collect our dataset in three stages: curating passages, crowdsourcing question-answers on those passages, and obtaining additional answers.
Passage curation. To retrieve high-quality articles, we used Project Nayuki's Wikipedia's internal PageRanks to obtain the top 10000 articles of English Wikipedia, from which we sampled 536 articles uniformly at random. From each of these articles, we extracted individual paragraphs, stripping away images, figures, and tables, and discarding paragraphs shorter than 500 characters. The result was 23,215 paragraphs for the 536 articles, covering a wide range of topics from musical celebrities to abstract concepts. We partitioned the articles randomly into a training set (80%), a development set (10%), | 1606.05250#12 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
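As a rough illustration of the curation steps in the chunk above (markup stripping, the 500-character minimum, and the random 80/10/10 article-level split), here is a minimal sketch. strip_markup and the input format are placeholders, not the authors' pipeline.

```python
# A minimal sketch of the passage-curation procedure, assuming articles arrive
# as {title: [paragraph, ...]}; strip_markup is a stub for removing images,
# figures, and tables.
import random

def strip_markup(paragraph: str) -> str:
    return paragraph.strip()  # placeholder for real markup removal

def curate(articles, min_chars=500, seed=0):
    curated = {
        title: [p for p in map(strip_markup, paras) if len(p) >= min_chars]
        for title, paras in articles.items()
    }
    titles = sorted(curated)
    random.Random(seed).shuffle(titles)          # split at the article level
    n = len(titles)
    train = titles[: int(0.8 * n)]
    dev = titles[int(0.8 * n): int(0.9 * n)]
    test = titles[int(0.9 * n):]
    return curated, train, dev, test

articles = {f"Article {i}": ["Lorem ipsum " * 50] for i in range(10)}
curated, train, dev, test = curate(articles)
print(len(train), len(dev), len(test))  # 8 1 1
```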
1606.05378 | 12 | We now describe Model A, our full context-dependent semantic parsing model. First, let Z denote the set of candidate logical forms (e.g., pour(color(green),color(red))). Each logical form consists of a top-level action with arguments, which are either primitive values (green, 3, etc.) or composed via selection and superlative operations. See Table 2 for a full description. One notable feature of the logical forms is the context dependency: for example, given some context (w0, z1:4), the predicate actions[2] refers to the action of z2 and args[2][1] refers to the first argument of z2.¹
Data collection. Our strategy was to automatically generate sequences of world states and ask Amazon Mechanical Turk (AMT) workers to describe the successive transformations. Specifically, we started with a random world state w0. For each i = 1, . . . , L, we sample a valid action and argument (e.g., pour(beaker1, beaker2)). To encourage context-dependent descriptions, we upweight recently used ac- | 1606.05378#12 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05250 | 13 | and a test set (10%).
Question-answer collection. Next, we employed crowdworkers to create questions. We used the Daemo platform (Gaikwad et al., 2015), with Ama- zon Mechanical Turk as its backend. Crowdworkers were required to have a 97% HIT acceptance rate, a minimum of 1000 HITs, and be located in the United States or Canada. Workers were asked to spend 4 minutes on every paragraph, and paid $9 per hour for the number of hours required to complete the article. The task was reviewed favorably by crowdworkers, receiving positive comments on Turkopticon.
On each paragraph, crowdworkers were tasked with asking and answering up to 5 questions on the content of that paragraph. The questions had to be entered in a text field, and the answers had to be highlighted in the paragraph. To guide the workers, tasks contained a sample paragraph, and examples of good and bad questions and answers on that paragraph along with the reasons they were categorized as such. Additionally, crowdworkers were encouraged to ask questions in their own words, without copying word phrases from the paragraph. On the interface, this was reinforced by a reminder prompt at the beginning of every paragraph, and by disabling copy-paste functionality on the paragraph text. | 1606.05250#13 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
We use the term anchored logical forms (a.k.a. derivations) to refer to logical forms augmented with alignments between sub-logical forms of zi and spans of the utterance xi. In the example above, color(green) might align with "green beaker" from Figure 1; see Figure 2 for another example.
1These special predicates play the role of references in Zettlemoyer and Collins (2009). They perform context-independent parsing and resolve references, whereas we resolve them jointly while parsing.
Property[p] Value[v] → Set[p(v)]: all entities whose property p is v. Set[s] Property[p] → Value[argmin/argmax(s, p)]: element in s with smallest/largest p. Set[s] Int[i] → Value[s[i]]: i-th element of s. Action[a] Value[v1] Value[v2] → Root[a(v1, v2)]: top-level action applied to arguments v1, v2 | 1606.05378#13 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
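The selection and superlative rules in the grammar above can be read as a tiny interpreter. The sketch below assumes entities are plain dicts of properties, which is a simplification for clarity rather than the paper's representation.

```python
# A toy interpreter for the grammar's composition rules: p(v) filters
# entities, argmin/argmax pick extremes by a property, and s[i] indexes.
def select(entities, prop, value):           # Set[p(v)]
    return [e for e in entities if e[prop] == value]

def superlative(s, prop, largest=False):     # Value[argmin/argmax(s, p)]
    return (max if largest else min)(s, key=lambda e: e[prop])

def index(s, i):                             # Value[s[i]], 1-indexed
    return s[i - 1]

beakers = [{"pos": 1, "color": "green"},
           {"pos": 3, "color": "green"},
           {"pos": 2, "color": "red"}]

# "the last green beaker" = argmax(color(green), pos)
print(superlative(select(beakers, "color", "green"), "pos", largest=True))
```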
Additional answers collection. To get an indication of human performance on SQuAD and to make our evaluation more robust, we obtained at least 2 additional answers for each question in the development and test sets. In the secondary answer generation task, each crowdworker was shown only the questions along with the paragraphs of an article, and asked to select the shortest span in the paragraph that answered the question. If a question was not answerable by a span in the paragraph, workers were asked to submit the question without marking an answer. Workers were recommended a speed of 5 questions for 2 minutes, and paid at the same rate of $9 per hour for the number of hours required for the entire article. Over the development and test sets, 2.6% of questions were marked unanswerable by at least one of the additional crowdworkers.
Answer type (percentage, example): Date (8.9%, 19 October 1512); Other Numeric (10.9%, 12); Person (12.9%, Thomas Coke); Location (4.4%, Germany); Other Entity (15.3%, ABC Sports); Common Noun Phrase (31.8%, property damage); Adjective Phrase (3.9%, second-largest); Verb Phrase (5.5%, returned to Earth); Clause (3.7%, to avoid trivialization); Other (2.7%, quietly)
Table 2: We automatically partition our answers into the following categories. Our dataset consists of a large number of answers beyond proper noun entities. | 1606.05250#14 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
Table 2: Grammar that defines the space of candidate logical forms. Values include numbers, colors, as well as special tokens args[i][j] (for all i ∈ {1, . . . , L} and j ∈ {1, 2}) that refer to the j-th argument used in the i-th logical form. Actions include the fixed domain-specific set plus special tokens actions[i] (for all i ∈ {1, . . . , L}), which refer to the i-th action in the context. | 1606.05378#14 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
Table 2: We automatically partition our answers into the following categories. Our dataset consists of a large number of answers beyond proper noun entities.
# 4 Dataset Analysis
To understand the properties of SQuAD, we analyze the questions and answers in the development set. Specifically, we explore (i) the diversity of answer types, (ii) the difficulty of questions in terms of the type of reasoning required to answer them, and (iii) the degree of syntactic divergence between the question and answer sentences.
Diversity in answers. We automatically categorize the answers as follows: We first separate the numerical and non-numerical answers. The non-numerical answers are categorized using constituency parses and POS tags generated by Stanford CoreNLP. The proper noun phrases are further split into person, location, and other entities using NER tags. In Table 2, we can see that dates and other numbers make up 19.8% of the data; 32.6% of the answers are proper nouns of three different types; 31.8% are common noun phrase answers; and the remaining 15.8% are made up of adjective phrases, verb phrases, clauses, and other types. | 1606.05250#15 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
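The answer-typing procedure described in the chunk above (split numeric from non-numeric answers, then use parses, POS, and NER) can be approximated very roughly. The keyword and regex rules below are stand-ins for the Stanford CoreNLP pipeline, not the authors' implementation.

```python
# A simplified sketch of answer-type categorization; the real system used
# constituency parses, POS tags, and NER, all of which are stubbed out here.
import re

MONTHS = ("january february march april may june july "
          "august september october november december").split()

def answer_type(answer: str) -> str:
    tokens = answer.lower().split()
    if any(t in MONTHS for t in tokens) and any(re.fullmatch(r"\d{1,4}", t) for t in tokens):
        return "Date"
    if re.fullmatch(r"[\d.,%]+", answer):
        return "Other Numeric"
    if answer[:1].isupper():
        # NER would be needed to split Person / Location / Other Entity.
        return "Proper Noun (Person/Location/Other Entity)"
    return "Common Noun Phrase / Other"

for a in ["19 October 1512", "12", "Thomas Coke", "property damage"]:
    print(a, "->", answer_type(a))
```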
Derivation condition (sample feature): (F1) zi contains predicate r (zi contains predicate pour, "pour"); (F2) property p of zi.vj is y (color of arg 1 is green, "green"); (F3) action zi.a is a and property p of zi.vj is y (action is pour and pos of arg 2 is 2, "pour, 2"); (F4) properties p of zi.v1 is y and p' of zi.v2 is y' (color of arg 1 is green and pos of arg 2 is 2, "first green, 2"); (F5) arg zi.vj is one of zi−1's args (arg reused, "it"); (F6) action zi.a = zi−1.a (action reused, "pour"); (F7) properties p of zi.vj is y and p' of zi−1.vk is y' (pos of arg 1 is 2 and pos of prev. arg 2 is 2, "then"); (F8) t1 < s2 (spans don't overlap) | 1606.05378#15 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
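A few of the Table 3 indicator conditions, e.g. (F1), (F5), and (F6), conjoined with the aligned utterance span, could be computed along these lines. The logical-form encoding (action, values, span) is an assumed simplification, not the paper's data structure.

```python
# A hedged sketch of indicator features in the spirit of (F1), (F5), (F6),
# each conjoined with the aligned span as the table describes.
def features(z_i, z_prev, span_text):
    feats = {}
    feats[f"F1:contains={z_i['action']}|span={span_text}"] = 1.0    # predicate present
    if z_prev is not None:
        if set(z_i["values"]) & set(z_prev["values"]):
            feats[f"F5:arg_reused|span={span_text}"] = 1.0           # argument reuse
        if z_i["action"] == z_prev["action"]:
            feats[f"F6:action_reused|span={span_text}"] = 1.0        # action reuse
    return feats

z1 = {"action": "pour", "values": ["beaker3", "beaker2"]}
z2 = {"action": "pour", "values": ["beaker2", "beaker1"]}
print(features(z2, z1, "then into the first beaker"))
```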
Reasoning required to answer questions. To get a better understanding of the reasoning required to answer the questions, we sampled 4 questions from each of the 48 articles in the development set, and then manually labeled the examples with the categories shown in Table 3. The results show that all examples have some sort of lexical or syntactic divergence between the question and the answer in the passage. Note that some examples fall into more than one category. | 1606.05250#16 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
Table 3: Features φ(xi, ci, zi) for Model A: The left-hand side describes conditions under which the system fires indicator features, and the right-hand side shows sample features for each condition. For each derivation condition (F1)–(F7), we conjoin the condition with the span of the utterance that the referenced actions and arguments align to. For condition (F8), we just fire the indicator by itself.
Log-linear model. We place a conditional distribution over anchored logical forms zi ∈ Z given an utterance xi and context ci = (w0, z1:i−1), which consists of the initial world state w0 and the history of past logical forms z1:i−1. We use a standard log-linear model:
pθ(zi | xi, ci) ∝ exp(φ(xi, ci, zi) · θ), (1)
The exact features, given in Table 3, reference the first two utterances of Figure 1 and the associated logical forms below: | 1606.05378#16 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
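Equation (1) is a standard softmax over candidate logical forms, and Equation (2) chains it per utterance. A minimal sketch, with feature extraction stubbed out (in the paper, phi comes from Table 3's indicators):

```python
# A minimal log-linear scorer matching Eqs. (1)-(2): softmax over candidates,
# multiplied across utterances. theta and phi are toy stand-ins.
import math

def p_theta(candidates, phi, theta):
    """p(z_i | x_i, c_i) proportional to exp(phi(x_i, c_i, z_i) . theta)."""
    scores = [sum(theta.get(f, 0.0) * v for f, v in phi(z).items()) for z in candidates]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stabilized softmax
    total = sum(exps)
    return [e / total for e in exps]

def p_sequence(utterance_candidates, phi, theta, chosen):
    """p(z | x, w0) = product over i of p(z_i | x_i, (w0, z_{1:i-1}))."""
    prob = 1.0
    for cands, z in zip(utterance_candidates, chosen):
        prob *= p_theta(cands, phi, theta)[cands.index(z)]
    return prob

theta = {"is_mix": 1.5}
phi = lambda z: {"is_mix": 1.0 if z == "mix" else 0.0}
print(p_theta(["mix", "drain"], phi, theta))   # ~[0.82, 0.18]
```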
Reasoning types (description, example, percentage): Lexical variation (synonymy): major correspondences between the question and the answer sentence are synonyms. Q: What is the Rankine cycle sometimes called? Sentence: The Rankine cycle is sometimes referred to as a practical Carnot cycle. (33.3%) Lexical variation (world knowledge): major correspondences between the question and the answer sentence require world knowledge to resolve. Q: Which governing bodies have veto power? Sen.: The European Parliament and the Council of the European Union have powers of amendment and veto during the legislative process. (9.1%) Syntactic variation: after the question is paraphrased into declarative form, its syntactic dependency structure does not match that of the answer sentence even after local modifications. Q: What Shakespeare scholar is currently on the faculty? Sen.: Current faculty include the anthropologist Marshall Sahlins, ..., Shakespeare scholar David Bevington. (64.1%) Multiple sentence reasoning: there is anaphora, or higher-level fusion of multiple sentences is required. Q: What collection does the V&A Theatre & Performance galleries hold? Sen.: The V&A Theatre & Performance galleries opened in March 2009. They | 1606.05250#17 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05378 | 17 | (1)
The exact features, given in Table 3, reference the first two utterances of Figure 1 and the associated logical forms below:
x1 = "Pour the last green beaker into beaker 2." z1 = pour(argmin(color(green),pos),pos(2)) x2 = "Then into the first beaker." z2 = actions[1](args[1][2],pos(3)).
where φ is the feature mapping and θ is the parameter vector (to be learned). Chaining these distributions together, we get a distribution over a sequence of logical forms z = (z1, . . . , zL) given the whole text x:
pθ(z | x, w0) = ∏_{i=1}^{L} pθ(zi | xi, (w0, z1:i−1)). (2)
Features. Our feature mapping φ consists of two types of indicators: | 1606.05378#17 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
sentences is required. Q: What collection does the V&A Theatre & Performance galleries hold? Sen.: The V&A Theatre & Performance galleries opened in March 2009. They hold the UK's biggest national collection of material about live performance. ... (13.6%) Ambiguous: we don't agree with the crowdworkers' answer, or the question does not have a unique answer. Q: What is the main goal of criminal punishment? Sen.: Achieving crime control via incapacitation and deterrence is a major goal of criminal punishment. (6.1%) | 1606.05250#18 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
Features. Our feature mapping φ consists of two types of indicators:
We describe the notation we use for Table 3, restricting our discussion to actions that have two or fewer arguments. Our featurization scheme, however, generalizes to an arbitrary number of arguments. Given a logical form zi, let zi.a be its action and (zi.b1, zi.b2) be its arguments (e.g., color(green)). The first and second arguments are anchored over spans [s1, t1] and [s2, t2], respectively. Each argument zi.bj has a corresponding value zi.vj (e.g., beaker1), obtained by executing zi.bj on the context ci. Finally, let j, k ∈ {1, 2} be indices of the arguments. For example, we would label the constituent parts of z1 (defined above) as follows:
1. For each derivation, we fire features based on the structure of the logical form/spans. | 1606.05378#18 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05250 | 19 | Table 3: We manually labeled 192 examples into one or more of the above categories. Words relevant to the corresponding reasoning type are bolded, and the crowdsourced answer is underlined.
Q: What department store is thought to be the first in the world? S: Bainbridge's is often cited as the world's first department store.
Path (Q): first ←xcomp← thought →nsubjpass→ store →det→ what. Path (S): first ←amod← store ←nmod← cited →nsubjpass→ Bainbridge's. Edit operations: one delete, one substitute, one insert. Edit cost: 1 + 2 + 1 = 4.
Figure 3: An example walking through the computation of the syntactic divergence between the question Q and answer sentence S.
Stratification by syntactic divergence. We also develop an automatic method to quantify the syntactic divergence between a question and the sentence containing the answer. This provides another way to measure the difficulty of a question and to stratify the dataset, which we return to in Section 6.3. | 1606.05250#19 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1. For each derivation, we fire features based on the structure of the logical form/spans.
2. For each span s (e.g., "green beaker") aligned to a sub-logical form z (e.g., color(green)), we fire features on unigrams, bigrams, and trigrams inside s conjoined with various conditions of z.
z1.a = pour • z1.b1 = argmin(color(green),pos) • z1.v1 = beaker3 • z1.b2 = pos(2) • z1.v2 = beaker2
[Figure: four parser states for "Delete the second figure." followed by "Repeat.": (1) the stack holds delete(pos(2)); (2) after shifting "Repeat", actions[1] is built, aligned to "Repeat"; (3) args[1][1] is built, unaligned; (4) the two combine into actions[1](args[1][1]), which resolves to delete(pos(2)).] | 1606.05378#19 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
the anchor "first" in the question to the wh-word "what", and the other from the anchor in the answer sentence to the answer span "Bainbridge's", are then extracted from the dependency parse trees. We measure the edit distance between these two paths, which we define as the minimum number of deletions or insertions needed to transform one path into the other. The syntactic divergence is then defined as the minimum edit distance over all possible anchors. The histogram in Figure 4a shows that there is a wide range of syntactic divergence in our dataset. We also show a concrete example where the edit distance is 0 and another where it is 6. Note that our syntactic divergence ignores lexical variation. Also, small divergence does not mean that a question is easy, since there could be other candidates with similarly small divergence.
We illustrate how we measure the divergence with the example in Figure 3. We first detect anchors (word-lemma pairs common to both the question and answer sentences); in the example, the anchor is "first". The two unlexicalized paths, one from
# 5 Methods
We developed a logistic regression model and compare its accuracy with that of three baseline methods. | 1606.05250#20 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
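The divergence measure described in the chunk above (minimum insertions/deletions between unlexicalized dependency paths, minimized over anchors) is easy to sketch. The path extraction from parse trees is elided, and the toy label sequences below are only loosely based on Figure 3.

```python
# A sketch of the syntactic-divergence computation: insertion/deletion-only
# edit distance between dependency-label sequences, minimized over anchors.
from functools import lru_cache

def indel_distance(p, q):
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0 or j == 0:
            return i + j
        if p[i - 1] == q[j - 1]:
            return d(i - 1, j - 1)
        return 1 + min(d(i - 1, j), d(i, j - 1))   # delete or insert only
    return d(len(p), len(q))

def divergence(paths_q, paths_s):
    """Minimum over shared anchors of the path edit distance."""
    return min(indel_distance(pq, paths_s[a])
               for a, pq in paths_q.items() if a in paths_s)

# Toy label sequences (anchor "first"); a substitution costs delete + insert.
q_paths = {"first": ("xcomp", "nsubjpass", "det")}
s_paths = {"first": ("amod", "nmod", "nsubjpass")}
print(divergence(q_paths, s_paths))   # prints 4
```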
Figure 5: Suppose we have already constructed delete(pos(2)) for "Delete the second figure." Continuing, we shift the utterance "Repeat". Then, we build actions[1] aligned to the word "Repeat", followed by args[1][1], which is unaligned. Finally, we combine the two logical forms.
# 4 Left-to-right parsing
We describe a new parser suitable for learning from denotations in the context-dependent setting. Like a shift-reduce parser, we proceed left to right, but each shift operation advances an entire utterance rather than one word. We then sit on the utterance for a while, performing a sequence of build operations, which either combine two logical forms on the stack (like the reduce operation) or generate fresh logical forms, similar to what is done in the floating parser of Pasupat and Liang (2015). | 1606.05378#20 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05250 | 21 | # 5 Methods
We developed a logistic regression model and compare its accuracy with that of three baseline methods.
[Figure: histogram of syntactic divergence; x-axis: divergence (0–8), y-axis: percentage of examples.]
# (a) Histogram of syntactic divergence.
Q: Who went to Wittenberg to hear Luther speak? S: Students thronged to Wittenberg to hear Luther speak. Path:
Q: Who ←nsubj← went →nmod→ Wittenberg
S: Students ←nsubj← thronged →nmod→ Wittenberg
(b) An example of a question-answer pair with edit distance 0 between the dependency paths (note that lexical variation is ignored in the computation of edit distance).
Q: What impact did the high school education movement have on the presence of skilled workers? S: During the mass high school education movement from 1910 â 1940 , there was an increase in skilled workers. Path:
[Dependency paths for Q and S sharing the anchors "school"/"movement": the Q path runs through compound, nsubj, have, dobj, impact, det, What; the S path runs through compound, nmod, 1910, acl, was, nsubj, increase. The full edge structure is not recoverable from the extraction; see Figure 4c.]
(c) An example of a question-answer pair with edit distance 6. | 1606.05250#21 | SQuAD: 100,000+ Questions for Machine Comprehension of Text | We present the Stanford Question Answering Dataset (SQuAD), a new reading
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
Our parser has two desirable properties: First, proceeding left-to-right allows us to build and score logical forms zi that depend on the world state wi−1, which is a function of the previous logical forms. Note that wi−1 is a random variable in our setting, whereas it is fixed in Zettlemoyer and Collins (2009). Second, the build operation allows us the flexibility to handle ellipsis (e.g., "Mix.") and anaphora on full logical forms (e.g., "Do it again."), where there's not a clear alignment between the words and the predicates generated.
Build: The parser creates a new logical form by combining zero or more logical forms on the stack. There are four types of build operations:
1. Create a predicate out of thin air (e.g., args[1][1] in Figure 5). This is useful when the utterance does not explicitly reference the arguments or action. For example, in Figure 5, we are able to generate the logical form args[1][1] in the presence of ellipsis. | 1606.05378#21 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
Figure 4: We use the edit distance between the unlexicalized dependency paths in the question and the sentence containing the answer to measure syntactic divergence.
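To make the divergence computation concrete, here is a minimal sketch; the tuple representation of path steps and the unit edit costs are our assumptions, since the paper does not spell them out:

```python
def path_edit_distance(path_q, path_s):
    """Levenshtein edit distance between two unlexicalized dependency
    paths, each a sequence of (direction, relation) steps, e.g.
    [("up", "nsubj"), ("down", "nmod")]. Lexical items are dropped,
    so the two paths in Figure 4(b) have distance 0."""
    m, n = len(path_q), len(path_s)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if path_q[i - 1] == path_s[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]
```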
Candidate answer generation. For all four methods, rather than considering all O(L^2) spans as candidate answers, where L is the number of words in the sentence, we only use spans which are constituents in the constituency parse generated by Stanford CoreNLP. Ignoring punctuation and articles, we find that 77.3% of the correct answers in the development set are constituents. This places an effective ceiling on the accuracy of our methods. During training, when the correct answer of an example is not a constituent, we use the shortest constituent containing the correct answer as the target.
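A sketch of this candidate generation step, assuming a simple nested-list parse tree in place of the Stanford CoreNLP data structures:

```python
def candidate_spans(tree):
    """Collect the (start, end) token spans of all constituents in a
    nested-list parse tree such as ["S", ["NP", "I"], ["VP", "ran"]];
    these spans are the only candidate answers considered."""
    spans = set()
    def walk(node, start):
        if isinstance(node, str):   # a leaf token
            return start + 1
        end = start
        for child in node[1:]:      # node[0] is the constituent label
            end = walk(child, end)
        spans.add((start, end))     # this subtree covers tokens [start, end)
        return end
    walk(tree, 0)
    return spans
```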
# 5.1 Sliding Window Baseline
For each candidate answer, we compute the unigram/bigram overlap between the sentence containing it (excluding the candidate itself) and the question. We keep all the candidates that have the maximal overlap. Among these, we select the best one using the sliding-window approach proposed in Richardson et al. (2013).
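A sketch of the sliding-window score, following our reading of Richardson et al. (2013); the tokenization and the exact makeup of the target word set (typically question words plus candidate words) are assumptions:

```python
import math
from collections import Counter

def sliding_window_score(context_tokens, target_words):
    """Slide a window of size |target_words| over the context; each
    window scores the sum of inverse-count weights IC(w) = log(1 + 1/C(w))
    of its tokens that appear in target_words; return the max window score."""
    counts = Counter(context_tokens)
    ic = {w: math.log(1.0 + 1.0 / counts[w]) for w in counts}
    k = max(1, len(target_words))
    best = 0.0
    for start in range(max(1, len(context_tokens) - k + 1)):
        window = context_tokens[start:start + k]
        best = max(best, sum(ic[w] for w in window if w in target_words))
    return best
```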
2. Create a predicate anchored to some span of the utterance (e.g., actions[1] anchored to "Repeat"). This allows us to do credit assignment and capture which part of the utterance explains which part of the logical form.
3. Pop z from the stack σ and push z' onto σ, where z' is created by applying a rule in Table 2 to z.
4. Pop z, z' from the stack σ and push z'' onto σ, where z'' is created by applying a rule in Table 2 to z, z' (e.g., actions[1](args[1][1]) by the top-level root rule).
The parser transitions through a sequence of hypotheses. Each hypothesis is h = (i, b, σ), where i is the index of the current utterance, b is the number of predicates constructed on utterance xi, and σ is a stack (list) of logical forms. The stack includes both the previous logical forms z1:i-1 and fragments of logical forms built on the current utterance. When processing a particular hypothesis, the parser can choose to perform either the shift or build operation:
The build step stops once a maximum number of predicates B have been constructed or when the top-level rule is applied.
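A minimal sketch of the hypothesis state and its two transitions; rule application and log-linear scoring are elided, and all names here are our own:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Hypothesis:
    i: int                     # index of the current utterance
    b: int                     # predicates built on utterance x_i
    stack: Tuple[object, ...]  # sigma: past logical forms z_{1:i-1} plus fragments

def shift(h: Hypothesis) -> Hypothesis:
    # Move to the next utterance: (i, b, sigma) -> (i + 1, 0, sigma).
    return Hypothesis(h.i + 1, 0, h.stack)

def build(h: Hypothesis, new_lf, n_popped: int = 0) -> Hypothesis:
    # One build step: pop zero or more logical forms, push the newly
    # built one, and count it toward the per-utterance budget B.
    stack = h.stack[:len(h.stack) - n_popped] if n_popped else h.stack
    return Hypothesis(h.i, h.b + 1, stack + (new_lf,))
```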
In addition to the basic sliding window approach, we also implemented the distance-based extension (Richardson et al., 2013). Whereas Richardson et al. (2013) used the entire passage as the context of an answer, we used only the sentence containing the candidate answer for efficiency.
# 5.2 Logistic Regression
In our logistic regression model, we extract several types of features for each candidate answer. We discretize each continuous feature into 10 equally-sized buckets, building a total of 180 million features, most of which are lexicalized features or dependency tree path features. The descriptions and examples of the features are summarized in Table 4.
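As an illustration of the discretization step, a small sketch; since the text does not say whether "equally-sized" means equal width or equal frequency, this assumes equal-frequency buckets:

```python
import numpy as np

def bucketize(values, n_buckets=10):
    """Discretize a continuous feature into n_buckets equal-frequency
    buckets; returns an integer bucket id in [0, n_buckets) per value."""
    qs = np.quantile(values, np.linspace(0, 1, n_buckets + 1)[1:-1])
    return np.searchsorted(qs, values, side="right")
```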
The matching word and bigram frequencies as well as the root match features help the model pick the correct sentences. Length features bias the model towards picking common lengths and positions for answer spans, while span word frequencies bias the model against uninformative words. Constituent label and span POS tag features guide the model towards the correct answer types. In addition to these basic features, we resolve lexical variation using lexicalized features, and syntactic variation using dependency tree path features.
We have so far described the search space over logical forms. In practice, we keep a beam of the K hypotheses with the highest score under the current log-linear model.
Shift: The parser moves to the next utterance by incrementing the utterance index i and resetting b, which transitions a hypothesis from (i, b, σ) to (i + 1, 0, σ).

# 5 Model Projections
Model A is ambitious, as it tries to learn from scratch how each word aligns to part of the logical form. For example, when Model A parses "Mix it", one derivation will correctly align "Mix" to mix, but others will align "Mix" to args[1][1], "Mix" to pos(2), and so on (Figure 2).
As we do not assume a seed lexicon that could map "Mix" to mix, the set of anchored logical forms is exponentially large. For example, parsing just the first sentence of Figure 1 would generate 1,216,140 intermediate anchored logical forms.
The multiclass log-likelihood loss is optimized using AdaGrad with an initial learning rate of 0.1. Each update is performed on the batch of all questions in a paragraph for efficiency, since they share the same candidates. L2 regularization is used, with a coefficient of 0.1 divided by the number of batches. The model is trained with three passes over the training data.

Feature Groups (Description; Examples):
- Matching Word Frequencies: Sum of the TF-IDF of the words that occur in both the question and the sentence containing the candidate answer. Separate features are used for the words to the left, to the right, inside the span, and in the whole sentence. Examples: Span: [0 ≤ sum < 0.01]; Left: [7.9 ≤ sum < 10.7]
- Matching Bigram Frequencies: Same as above, but using bigrams. We use the generalization of the TF-IDF described in Shirakawa et al. (2015). Examples: Span: [0 ≤ sum < 2.4]; Left: [0 ≤ sum < 2.7]
How can we reduce the search space? The key is that the space of logical forms is much smaller than the space of anchored logical forms. Even though both grow exponentially, dealing directly with logical forms allows us to generate pour without the combinatorial choice over alignments. We thus define Model B over the space of these logical forms. Figure 2 shows that the two anchored logical forms, which are treated differently in Model A, are collapsed in Model B. This dramatically reduces the search space; parsing the first sentence of Figure 1 generates 7,047 intermediate logical forms.
We can go further and notice that many compositional logical forms reduce to the same flat logical form once we evaluate the arguments. For example, in Figure 2, mix(args[1][1]) and mix(pos(2)) are equivalent to mix(beaker2). We define Model C to be the space of these flat logical forms, which consist of a top-level action plus primitive arguments. Using Model C, parsing the first sentence of Figure 1 generates only 349 intermediate logical forms.
- Root Match: Whether the dependency parse tree roots of the question and sentence match, whether the sentence contains the root of the dependency parse tree of the question, and whether the question contains the root of the dependency parse tree of the sentence. Example: Root Match = False
- Lengths: Number of words to the left, to the right, inside the span, and in the whole sentence. Examples: Span: [1 ≤ num < 2]; Left: [15 ≤ num < 19]
- Span Word Frequencies: Sum of the TF-IDF of the words in the span, regardless of whether they appear in the question. Example: Span: [5.2 ≤ sum < 6.9]
- Constituent Label: Constituency parse tree label of the span, optionally combined with the wh-word in the question. Examples: Span: NP; Span: NP, wh-word: "what"
- Span POS Tags: Sequence of the part-of-speech tags in the span, optionally combined with the wh-word in the question. Examples: Span: [NN]; Span: [NN], wh-word: "what"
- Lexicalized: Lemmas of question words combined with the lemmas of words within distance 2 to the span in the sentence based on the dependency parse trees. Separately, question word lemmas combined with answer word lemmas. Examples: Q: "cause", S: "under" (case); Q: "fall", A: "gravity"
Note that in the context-dependent setting, the number of flat logical forms (Model C) still increases exponentially with the number of utterances, but it is an overwhelming improvement over Model A. Furthermore, unlike other forms of relaxation, we are still generating logical forms that can express any denotation as before. The gains from Model B to Model C hinge on the fact that in our world, the number of denotations is much smaller than the number of logical forms.
Projecting the features. While we have defined the space over logical forms for Models B and C, we still need to define a distribution over these spaces to complete the picture. To do this, we propose projecting the features of the log-linear model (1). Define ΠA→B to be a map from an anchored logical form zA (e.g., mix(pos(2)) aligned to "mix") to an unanchored one zB (e.g., mix(pos(2))), and define ΠB→C to be a map from zB to the flat logical form zC (e.g., mix(beaker2)).
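A sketch of how such a projection can be used during search: candidate logical forms are collapsed into equivalence classes under the projection and scored with the projected features. All names here are illustrative:

```python
from collections import defaultdict

def beam_over_classes(candidates, project, phi, w, K):
    """Collapse candidate logical forms into equivalence classes under a
    projection (Pi_{A->B} or Pi_{B->C}, passed as `project`), then keep
    the K best classes under the projected feature map `phi` and
    weight dictionary `w`."""
    classes = defaultdict(list)
    for z in candidates:
        classes[project(z)].append(z)   # equivalent forms share one key
    def score(z_proj):
        return sum(w.get(f, 0.0) * v for f, v in phi(z_proj).items())
    return sorted(classes, key=score, reverse=True)[:K]
```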
- Dependency Tree Paths: For each word that occurs in both the question and sentence, the path in the dependency parse tree from that word in the sentence to the span, optionally combined with the path from the wh-word to the word in the question. POS tags are included in the paths. Examples: VBZ -nmod-> NN, optionally combined with a question path such as what <-advcl- VBZ
Table 4: Features used in the logistic regression model with examples for the question "What causes precipitation to fall?", sentence "In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity." and answer "gravity". Q denotes question, A denotes candidate answer, and S denotes sentence containing the candidate answer.
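For concreteness, a sketch of one training update described above (softmax log-likelihood over a paragraph's candidate answers, AdaGrad, L2 regularization); dense vectors stand in for the model's sparse features:

```python
import numpy as np

def adagrad_update(w, g2, X, gold, lr=0.1, l2=0.0):
    """One batch update. X holds one feature row per candidate answer;
    gold is the index of the correct candidate; g2 accumulates squared
    gradients across updates (the AdaGrad statistic)."""
    scores = X @ w
    p = np.exp(scores - scores.max())
    p /= p.sum()                          # softmax over candidates
    grad = X.T @ p - X[gold] + l2 * w     # NLL gradient plus L2 term
    g2 += grad ** 2
    w -= lr * grad / (np.sqrt(g2) + 1e-8)
    return w, g2
```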
# 6 Experiments

# 6.1 Model Evaluation
We use two different metrics to evaluate model accuracy. Both metrics ignore punctuation and articles (a, an, the).
Exact match. This metric measures the percentage of predictions that match any one of the ground truth answers exactly.
# 6.2 Human Performance

We assess human performance on SQuAD's development and test sets. Recall that each of the questions in these sets has at least three answers. To evaluate human performance, we treat the second answer to each question as the human prediction, and keep the other answers as ground truth answers. The resulting human performance score on the test set is 77.0% for the exact match metric, and 86.8% for F1. Mismatch occurs mostly due to inclusion/exclusion of non-essential phrases (e.g., monsoon trough versus movement of the monsoon trough) rather than fundamental disagreements about the answer.
Concretely, Model B's features include indicator features over LF conditions in Table 3 conjoined with every n-gram of the entire utterance, as there is no alignment. This is similar to the model of Pasupat and Liang (2015). Note that most of the derivation conditions (F2)–(F7) already depend on properties of the denotations of the arguments, so in Model C, we can directly reason over the space of flat logical forms zC (e.g., mix(beaker2)) rather than explicitly computing the max over more complex logical forms zB (e.g., mix(color(red))).
Expressivity. In going from Model A to Model C, we gain in computational efficiency, but we lose in modeling expressivity. For example, for "second green beaker" in Figure 1, instead of predicting color(green)[2], we would have to predict beaker3, which is not easily explained by the words "second green beaker" using the simple features in Table 3.
(Macro-averaged) F1 score. This metric measures the average overlap between the prediction and ground truth answer. We treat the prediction and ground truth as bags of tokens, and compute their F1. We take the maximum F1 over all of the ground truth answers for a given question, and then average over all of the questions.
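A sketch of both metrics; lowercasing and whitespace collapsing are our assumptions beyond the stated punctuation and article removal:

```python
import re
import string
from collections import Counter

def normalize(s):
    """Lowercase, drop punctuation and the articles a/an/the."""
    s = "".join(ch for ch in s.lower() if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(pred, golds):
    return max(float(normalize(pred) == normalize(g)) for g in golds)

def f1(pred, golds):
    def single(p_str, g_str):
        p, g = normalize(p_str).split(), normalize(g_str).split()
        if not p or not g:
            return float(p == g)
        overlap = sum((Counter(p) & Counter(g)).values())
        if overlap == 0:
            return 0.0
        prec, rec = overlap / len(p), overlap / len(g)
        return 2 * prec * rec / (prec + rec)
    return max(single(pred, g) for g in golds)  # max over ground truths
```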
# 6.3 Model Performance
Table 5 shows the performance of our models alongside human performance on the v1.0 of development and test sets. The logistic regression model significantly outperforms the baselines, but underperforms humans.
                      Exact Match         F1
                      Dev      Test    Dev      Test
Random Guess          1.1%     1.3%    4.1%     4.3%
Sliding Window        13.2%    12.5%   20.2%    19.7%
Sliding Win. + Dist.  13.3%    13.0%   20.2%    20.0%
Logistic Regression   40.0%    40.4%   51.0%    51.0%
Human                 80.3%    77.0%   90.5%    86.8%
Table 5: Performance of various methods and humans. Logistic regression outperforms the baselines, while a significant gap to human performance remains.
At the same time, we found that simple features can actually simulate some logical forms. For example, color(green) can be explained by the feature that looks at the color property of beaker3. Nailing color(green)[2], however, is not easy. Surprisingly, Model C can use a conjunction of features to express superlatives (e.g., argmax(color(red),pos)) by using one feature that places more mass on selecting objects that are red and another feature that places more mass on objects that have a greater position value.
# 6 Experiments
Our experiments aim to explore the computation-expressivity tradeoff in going from Model A to Model B to Model C. We would expect that under the computational constraint of a finite beam size, Model A will be hurt the most, but with an infinite beam, Model A should perform better.
                        F1 Train   F1 Dev
Logistic Regression     91.7%      51.0%
- Lex., - Dep. Paths    33.9%      35.8%
- Lexicalized           53.5%      45.4%
- Dep. Paths            91.4%      46.4%
- Match. Word Freq.     91.7%      48.1%
- Span POS Tags         91.7%      49.7%
- Match. Bigram Freq.   91.7%      50.3%
- Constituent Label     91.7%      50.4%
- Lengths               91.8%      50.5%
- Span Word Freq.       91.7%      50.5%
- Root Match            91.7%      50.6%
Table 6: Performance with feature ablations. We find that lexicalized and dependency tree path features are most important.
We note that the model is able to select the sentence containing the answer correctly with 79.3% accuracy; hence, the bulk of the difficulty lies in finding the exact span within the sentence.
Dataset    Model   3-acc   3-ora   5-acc   5-ora
ALCHEMY    B       0.189   0.258   0.037   0.055
           C       0.568   0.925   0.523   0.809
SCENE      B       0.068   0.118   0.017   0.031
           C       0.232   0.431   0.147   0.253
TANGRAMS   B       0.649   0.910   0.276   0.513
           C       0.567   0.899   0.272   0.698
Table 4: Test set accuracy and oracle accuracy for examples containing L = 3 and L = 5 utterances. Model C surpasses Model B in both accuracy and oracle on ALCHEMY and SCENE, whereas Model B does better in TANGRAMS.
We evaluate all models on accuracy, the fraction of examples that a model predicts correctly. A predicted logical form z is deemed to be correct for an example (w0, x, wL) if the predicted logical form z executes to the correct final world state wL. We also measure the oracle accuracy, which is the fraction of examples where at least one z on the beam executes to wL. All experiments train for 6 iterations using AdaGrad (Duchi et al., 2010) and L1 regularization with a coefficient of 0.001.
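A sketch of the two evaluation measures, with parse_beam and execute as placeholders for the parser and the world-state executor:

```python
def accuracy_and_oracle(examples, parse_beam, execute):
    """examples: list of (w0, x, wL) triples. parse_beam returns the
    K-best logical forms z, highest-scoring first; execute(w0, z)
    applies the sequence of logical forms z to the initial world w0.
    A prediction is correct if it reaches the annotated final state wL."""
    correct = oracle = 0
    for w0, x, wL in examples:
        beam = parse_beam(w0, x)
        execs = [execute(w0, z) for z in beam]
        correct += bool(execs) and execs[0] == wL   # top of the beam
        oracle += wL in execs                       # anywhere on the beam
    n = len(examples)
    return correct / n, oracle / n
```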
Feature ablations. In order to understand the features that are responsible for the performance of the logistic regression model, we perform a feature ablation where we remove one group of features from our model at a time. The results, shown in Table 6, indicate that lexicalized and dependency tree path features are most important. Comparing our analysis to the one in Chen et al. (2016), we note that the dependency tree path features play a much bigger role in our dataset. Additionally, we note that with lexicalized features, the model significantly overfits the training set; however, we found that increasing L2 regularization hurts performance on the development set.
Performance stratified by answer type. To gain more insight into the performance of our logistic regression model, we report its performance across the answer types explored in Table 2.
# 6.1 Real data experiments
Setup. We use a beam size of 500 within each utterance, and prune to the top 5 between utterances. For the first two iterations, Models B and C train on only the first utterance of each example (L = 1). In the remaining iterations, the models train on two-utterance examples. We then evaluate on examples with L = 1, ..., 5, which tests our models' ability to extrapolate to longer texts.
Accuracy with finite beam. We compare Models B and C on the three real datasets for both L = 3 and L = 5 utterances (Model A was too expensive to use). Table 4 shows that on 5-utterance examples, the flatter Model C achieves an average accuracy 20% higher than the more compositional Model B. Similarly, the average oracle accuracy is 39% higher. This suggests that (i) the correct logical form often falls off the beam for Model B due to a larger search space, and (ii) the expressivity of Model C is sufficient in many cases.
                      Logistic Regression   Human
                      Dev F1                Dev F1
Date                  72.1%                 93.9%
Other Numeric         62.5%                 92.9%
Person                56.2%                 95.4%
Location              55.4%                 94.1%
Other Entity          52.2%                 92.6%
Common Noun Phrase    46.5%                 88.3%
Adjective Phrase      37.9%                 86.8%
Verb Phrase           31.2%                 82.4%
Clause                34.3%                 84.5%
Other                 34.8%                 86.1%
Table 7: Performance stratified by answer types. Logistic regression performs better on certain types of answers, namely numbers and entities. On the other hand, human performance is more uniform.
[Figure 5 plot: performance (%) against syntactic divergence (1-8) for Logistic Regression Dev F1 and Human Dev F1.]
Figure 5: Performance stratified by syntactic divergence of questions and sentences. The performance of logistic regression degrades with increasing divergence. In contrast, human performance is stable across the full range of divergence.
On the other hand, Model B outperforms Model C on the TANGRAMS dataset. This happens for two reasons. The TANGRAMS dataset has the smallest search space, since all of the utterances refer to objects using position only. Additionally, many utterances reference logical forms that
Model C is unable to express, such as "repeat the first step", or "add it back".
Figure 6 shows how the models perform as the number of utterances per example varies. When the search space is small (fewer utterances), Model B outperforms or is competitive with Model C. However, as the search space increases (tighter computational constraints), Model C does increasingly better.
Overall, both models perform worse as L increases, since to predict the final world state wL correctly, a model essentially needs to predict an entire sequence of logical forms z1, ..., zL, and errors cascade. Furthermore, for larger L, the utterances tend to have richer context-dependence.
The results (shown in Table 7) show that the model performs best on dates and other numbers, categories for which there are usually only a few plausible candidates, and most answers are single tokens. The model is challenged more on other named entities (i.e., location, person and other entities) because there are many more plausible candidates. However, named entities are still relatively easy to identify by their POS tag features. The model performs worst on other answer types, which together form 47.6% of the dataset. Humans have exceptional performance on dates, numbers and all named entities. Their performance on other answer types degrades only slightly.
Performance stratified by syntactic divergence. As discussed in Section 4, another challenging aspect of the dataset is the syntactic divergence between the question and answer sentence. Figure 5 shows that the more divergence there is, the lower the performance of the logistic regression model. Interestingly, humans do not seem to be sensitive to syntactic divergence, suggesting that deep understanding is not distracted by superficial differences. Measuring the degree of degradation could therefore be useful in determining the extent to which a model is generalizing in the right way.
# 6.2 Artificial data experiments
Setup. Due to the large search space, running Model A on real data is impractical. To feasibly evaluate Model A, we constructed an artificial dataset. The worlds are created using the procedure described in Section 2.2. We use a simple template to generate utterances (e.g., "drain 1 from the 2 green beaker").
To reduce the search space for Model A, we only allow actions (e.g., drain) to align to verbs and property values (e.g., green) to align to adjectives. Using these linguistic constraints provides a slightly optimistic assessment of Model A's performance.
We train on a dataset of 500 training examples and evaluate on 500 test examples. We repeat this procedure for varying beam sizes, from 40 to 260. The model only uses features (F1) through (F3).
# 7 Conclusion
Towards the end goal of natural language understanding, we introduce the Stanford Question Answering Dataset, a large reading comprehension dataset on Wikipedia articles with crowdsourced question-answer pairs. SQuAD features a diverse range of question and answer types. The performance of our logistic regression model, with 51.0% F1, against the human F1 of 86.8% suggests ample opportunity for improvement. We have made our dataset freely available to encourage exploration of more expressive models. Since the release of our dataset, we have already seen considerable interest in building models on this dataset, and the gap between our logistic regression model and human performance has more than halved (Wang and Jiang, 2016). We expect that the remaining gap will be harder to close, but that such efforts will result in significant advances in reading comprehension.
# Reproducibility
All code, data, and experiments for this paper are available on the CodaLab platform: https://worksheets.codalab.org/worksheets/0xd53d03a48ef64b329c16b9baf0f99b0c/.
Accuracy under infinite beam. Since Model A is more expressive, we would expect it to be more powerful when we have no computational constraints. Figure 7 shows that this is indeed the case: when the beam size is greater than 250, all models attain an oracle of 1, and Model A outperforms Model B, which performs similarly to Model C. This is because the alignments provide a powerful signal for constructing the logical forms. Without alignments, Models B and C learn noisier features, and accuracy suffers accordingly.
Bootstrapping. Model A performs the best with unconstrained computation, and Model C performs the best with constrained computation. Is there some way to bridge the two? Even though Model C has limited expressivity, it can still learn
[Figure 6: three panels, (a) ALCHEMY, (b) SCENE, (c) TANGRAMS, plotting percent correct against the number of utterances per example for Model B and Model C.]
# Acknowledgments
We would like to thank Durim Morina and Professor Michael Bernstein for their help in crowdsourcing the collection of our dataset, both in terms of funding and technical support of the Daemo platform.
# References
J. Berant, V. Srikumar, P. Chen, A. V. Linden, B. Harding, B. Huang, P. Clark, and C. D. Manning. 2014. Modeling biological processes for reading comprehension. In Empirical Methods in Natural Language Processing (EMNLP).
E. Brill, S. Dumais, and M. Banko. 2002. An analysis of the AskMSR question-answering system. In Association for Computational Linguistics (ACL), pages 257–264.
D. Chen, J. Bolton, and C. D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Association for Computational Linguistics (ACL).
P. Clark and O. Etzioni. 2016. My computer is an honor student but how intelligent is it? Standardized tests as a measure of AI. AI Magazine, 37(1):5–12.
Figure 6: Test results on our three datasets as we vary the number of utterances. The solid lines are the accuracies, and the dashed lines are the oracles: with a finite beam, Model C significantly outperforms Model B on ALCHEMY and SCENE, but is slightly worse on TANGRAMS.
Figure 8: Predicted logical forms for this text. The logical form add takes a figure and position as input. Model B predicts the correct logical form. Model C does not understand that "back" refers to position 5, and adds the cat figure to position 1.
J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pages 248–255.
D. Ferrucci, E. Brown, J. Chu-Carroll, J. Fan, D. Gondek, A. A. Kalyanpur, A. Lally, J. W. Murdock, E. Nyberg, J. Prager, N. Schlaefer, and C. Welty. 2010. Building Watson: An overview of the DeepQA project. AI Magazine, 31(3):59–79.
S. N. Gaikwad, D. Morina, R. Nistala, M. Agarwal, A. Cossette, R. Bhanu, S. Savage, V. Narwal, K. Rajpal, J. Regino, et al. 2015. Daemo: A self-governed crowdsourcing marketplace. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pages 101–102.
Figure 7: Test results on our artificial dataset with varying beam sizes. The solid lines are the accuracies, and the dashed lines are the oracle accuracies. Model A is unable to learn anything with beam size < 240. However, for beam sizes larger than 240, Model A attains 100% accuracy. Model C does better than Models A and B when the beam size is small (< 40), but otherwise performs comparably to Model B. Bootstrapping Model A using Model C parameters outperforms all of the other models and attains 100% even with smaller beams.
Model   Beam   Action   Argument   Context   Noise
B       0.47   0.03     0.17       0.23      0.04
C       0.15   0.03     0.25       0.50      0.07
Table 5: Percentage of errors for Models B and C: Model B suffers predominantly from computation constraints, while Model C suffers predominantly from a lack of expressivity.
K. M. Hermann, T. Kočiský, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS).
F. Hill, A. Bordes, S. Chopra, and J. Weston. 2015. The goldilocks principle: Reading children's books with explicit memory representations. In International Conference on Learning Representations (ICLR).
L. Hirschman, M. Light, E. Breck, and J. D. Burger. 1999. Deep read: A reading comprehension system. In Association for Computational Linguistics (ACL), pages 325–332.
M. J. Hosseini, H. Hajishirzi, O. Etzioni, and N. Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Empirical Methods in Natural Language Processing (EMNLP), pages 523–533.
N. Kushman, Y. Artzi, L. Zettlemoyer, and R. Barzilay. 2014. Learning to automatically solve algebra word problems. In Association for Computational Linguistics (ACL).
to associate words like "green" with their corresponding predicate green. These should be useful for Model A too.
To operationalize this, we first train Model C and use the parameters to initialize Model A. Then we train Model A. Figure 7 shows that although Model A and C predict different logical forms, the initialization allows Model A to perform well in constrained beam settings. This bootstrapping works here because Model C is a projection of Model A, and thus they share the same features.
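Mechanically, the warm start amounts to copying C's learned weight vector into A, since the two models score logical forms with the same feature templates. A minimal sketch, assuming weights are stored as a feature-to-value mapping (the feature names below are hypothetical):

```python
from collections import defaultdict

def warm_start(weights_c):
    """Initialize Model A's weight vector from a trained Model C.

    Model C is a projection of Model A, so every feature C has learned
    transfers directly; features that only fire under Model A start at
    zero via the defaultdict.
    """
    weights_a = defaultdict(float)
    weights_a.update(weights_c)
    return weights_a

# Hypothetical weights learned by Model C: feature -> value.
weights_c = {("utterance:green", "predicate:green"): 1.7,
             ("utterance:drain", "action:drain"): 2.3}
weights_a = warm_start(weights_c)
print(weights_a[("utterance:green", "predicate:green")])  # 1.7 (transferred)
print(weights_a[("utterance:mix", "action:mix")])         # 0.0 (new to A)
```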
We randomly sampled 20 incorrect predictions on 3 utterance examples from each of the three real datasets for Model B and Model C. We categorized each prediction error into one of the following categories: (i) logical forms falling off the beam; (ii) choosing the wrong action (e.g., mapping "drain" to pour); (iii) choosing the wrong argument due to misunderstanding the description (e.g., mapping "third beaker" to pos(1)); (iv) choosing the wrong action or argument due to misunderstanding of context (see Figure 8); (v) noise
in the dataset. Table 5 shows the fraction of each error category.
# 7 Related Work and Discussion | 1606.05378#36 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05250 | 37 | M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313–330.
K. Narasimhan and R. Barzilay. 2015. Machine comprehension with discourse relations. In Association for Computational Linguistics (ACL).
H. T. Ng, L. H. Teo, and J. L. P. Kwan. 2000. A machine learning approach to answering questions for reading comprehension tests. In Joint SIGDAT conference on empirical methods in natural language processing and very large corpora - Volume 13, pages 124–132.
D. Ravichandran and E. Hovy. 2002. Learning surface text patterns for a question answering system. In Association for Computational Linguistics (ACL), pages 41–47.
M. Richardson, C. J. Burges, and E. Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP), pages 193–203.
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05378 | 37 | in the dataset. Table 5 shows the fraction of each error category.
# 7 Related Work and Discussion
Context-dependent semantic parsing. Utterances can depend on either linguistic context or world state context. Zettlemoyer and Collins (2009) developed a model that handles references to previous logical forms; Artzi and Zettlemoyer (2013) developed a model that handles references to the current world state. Our system considers both types of context, handling linguistic phenomena such as ellipsis and anaphora that reference both previous world states and logical forms.
Logical form generation. Traditional semantic parsers generate logical forms by aligning each part of the logical form to the utterance (Zelle and Mooney, 1996; Wong and Mooney, 2007; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011). In general, such systems rely on a lexicon, which can be hand-engineered, extracted (Cai and Yates, 2013; Berant et al., 2013), or automatically learned from annotated logical forms (Kwiatkowski et al., 2010; Chen, 2012).
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05250 | 38 | E. Riloff and M. Thelen. 2000. A rule-based question answering system for reading comprehension tests. In ANLP/NAACL Workshop on reading comprehension tests as evaluation for computer-based language understanding systems - Volume 6, pages 13–19.
M. Sachan, A. Dubey, E. P. Xing, and M. Richardson. 2015. Learning answer-entailing structures for machine comprehension. In Association for Computational Linguistics (ACL).
D. Shen and D. Klakow. 2006. Exploring correlation of dependency relation paths for answer extraction. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL), pages 889–896.
M. Shirakawa, T. Hara, and S. Nishio. 2015. N-gram idf: A global term weighting scheme based on information distance. In World Wide Web (WWW), pages 960–970. H. Sun, N. Duan, Y. Duan, and M. Zhou. 2013. Answer extraction from passage graph for question answering. In International Joint Conference on Artificial Intelligence (IJCAI).
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05378 | 38 | Recent work on learning from denotations has moved away from anchored logical forms. Pasupat and Liang (2014) and Wang et al. (2015) proposed generating logical forms without alignments, similar to our Model B. Yao et al. (2014) and Bordes et al. (2014) have explored predicting paths in a knowledge graph directly, which is similar to the flat logical forms of Model C.
Relaxation and bootstrapping. The idea of first training a simpler model in order to work up to a more complex one has been explored in other contexts. In the unsupervised learning of generative models, bootstrapping can help escape local optima and provide helpful regularization (Och and Ney, 2003; Liang et al., 2009). When it is difficult to even find one logical form that reaches the denotation, one can use the relaxation technique of Steinhardt and Liang (2015).
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05250 | 39 | E. M. Voorhees and D. M. Tice. 2000. Building a question answering test collection. In ACM Special Interest Group on Information Retrieval (SIGIR), pages 200–207.
S. Wang and J. Jiang. 2016. Machine comprehension using match-LSTM and answer pointer. CoRR, abs/1608.07905.
H. Wang, M. Bansal, K. Gimpel, and D. McAllester. 2015. Machine comprehension with syntax, frames, and semantics. In Association for Computational Linguistics (ACL).
J. Weston, A. Bordes, S. Chopra, and T. Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv.
Y. Yang, W. Yih, and C. Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Empirical Methods in Natural Language Processing (EMNLP), pages 2013–2018.
comprehension dataset consisting of 100,000+ questions posed by crowdworkers on
a set of Wikipedia articles, where the answer to each question is a segment of
text from the corresponding reading passage. We analyze the dataset to
understand the types of reasoning required to answer the questions, leaning
heavily on dependency and constituency trees. We build a strong logistic
regression model, which achieves an F1 score of 51.0%, a significant
improvement over a simple baseline (20%). However, human performance (86.8%) is
much higher, indicating that the dataset presents a good challenge problem for
future research.
The dataset is freely available at https://stanford-qa.com | http://arxiv.org/pdf/1606.05250 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | cs.CL | To appear in Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing (EMNLP) | null | cs.CL | 20160616 | 20161011 | [] |
1606.05378 | 39 | Recall that projecting from Model A to C creates a more computationally tractable model at the cost of expressivity. However, this is because Model C used a linear model. One might imagine that a non-linear model would be able to recuperate some of the loss of expressivity. Indeed, Neelakantan et al. (2016) use recurrent neural networks to attempt to perform logical operations.
One could go one step further and bypass logical forms altogether, performing all the logical reasoning in a continuous space (Bowman et al., 2014; Weston et al., 2015; Guu et al., 2015; Reed and de Freitas, 2016). This certainly avoids the combinatorial explosion of logical forms in Model A, but could also present additional optimization challenges. It would be worth exploring this avenue to completely understand the computation-expressivity tradeoff.
# Reproducibility
Our system is available at https://worksheets.codalab.org/worksheets/0xad3fc9f52f514e849b282a105b1e3f02/.
# Acknowledgments
We thank the anonymous reviewers for their constructive feedback. The third author is supported by a Microsoft Research Faculty Fellowship.
# References | 1606.05378#39 | Simpler Context-Dependent Logical Forms via Model Projections | We consider the task of learning a context-dependent mapping from utterances
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05378 | 40 | # Acknowledgments
We thank the anonymous reviewers for their constructive feedback. The third author is supported by a Microsoft Research Faculty Fellowship.
# References
Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL), 1:49–62.
J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).
A. Bordes, S. Chopra, and J. Weston. 2014. Question answering with subgraph embeddings. In Empirical Methods in Natural Language Processing (EMNLP).
S. R. Bowman, C. Potts, and C. D. Manning. 2014. Can recursive neural tensor networks learn logical reasoning? In International Conference on Learning Representations (ICLR).
Q. Cai and A. Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL).
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05378 | 41 | Q. Cai and A. Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL).
D. L. Chen and R. J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Association for the Advancement of Artificial Intelligence (AAAI), pages 859–865.
D. L. Chen. 2012. Fast online lexicon learning for grounded language acquisition. In Association for Computational Linguistics (ACL).
J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world's response. In Computational Natural Language Learning (CoNLL), pages 18–27.
D. A. Dahl, M. Bates, M. Brown, W. Fisher, K. Hunicke-Smith, D. Pallett, C. Pao, A. Rudnicky, and E. Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In Workshop on Human Language Technology, pages 43–48.
J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT).
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05378 | 42 | K. Guu, J. Miller, and P. Liang. 2015. Traversing knowledge graphs in vector space. In Empirical Methods in Natural Language Processing (EMNLP).
T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP), pages 1223–1233.
T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1512–1523.
P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 91–99.
P. Liang. 2013. Lambda dependency-based compositional semantics. arXiv.
A. Neelakantan, Q. V. Le, and I. Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In International Conference on Learning Representations (ICLR).
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05378 | 43 | F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29:19–51.
P. Pasupat and P. Liang. 2014. Zero-shot entity extraction from web pages. In Association for Computational Linguistics (ACL).
P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL).
S. Reed and N. de Freitas. 2016. Neural programmer-interpreters. In International Conference on Learning Representations (ICLR).
J. Steinhardt and P. Liang. 2015. Learning with relaxed supervision. In Advances in Neural Information Processing Systems (NIPS).
A. Vlachos and S. Clark. 2014. A new corpus and imitation learning framework for context-dependent semantic parsing. Transactions of the Association for Computational Linguistics (TACL), 2:547–559.
Y. Wang, J. Berant, and P. Liang. 2015. Building a semantic parser overnight. In Association for Computational Linguistics (ACL).
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05378 | 44 | Y. Wang, J. Berant, and P. Liang. 2015. Building a semantic parser overnight. In Association for Computational Linguistics (ACL).
J. Weston, A. Bordes, S. Chopra, and T. Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv.
Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960–967.
X. Yao, J. Berant, and B. Van Durme. 2014. Freebase QA: Information extraction or semantic parsing. In Workshop on Semantic Parsing.
J. M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050–1055.
L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658–666.
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.05378 | 45 | L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678–687.
L. S. Zettlemoyer and M. Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP).
to denotations. With only denotations at training time, we must search over a
combinatorially large space of logical forms, which is even larger with
context-dependent utterances. To cope with this challenge, we perform
successive projections of the full model onto simpler models that operate over
equivalence classes of logical forms. Though less expressive, we find that
these simpler models are much faster and can be surprisingly effective.
Moreover, they can be used to bootstrap the full model. Finally, we collected
three new context-dependent semantic parsing datasets, and develop a new
left-to-right parser. | http://arxiv.org/pdf/1606.05378 | Reginald Long, Panupong Pasupat, Percy Liang | cs.CL, I.2.6; I.2.7 | 10 pages, ACL 2016 | null | cs.CL | 20160616 | 20160616 | [] |
1606.04671 | 0 |
# Progressive Neural Networks
Andrei A. Rusu*, Neil C. Rabinowitz*, Guillaume Desjardins*, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell * These authors contributed equally to this work
# Google DeepMind London, UK
{andreirusu, ncr, gdesjardins, soyer, kirkpatrick, korayk, razp, raia}@google.com
# Abstract
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
# Introduction | 1606.04671#0 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 1 | ABSTRACT Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models.
# CCS Concepts •Information systems → Retrieval models and ranking;
# Keywords Deep Matching Models, Ranking Models, Convolutional Neural Networks
# 1. INTRODUCTION | 1606.04648#1 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 1 | # Introduction
Finetuning remains the method of choice for transfer learning with neural networks: a model is pretrained on a source domain (where data is often abundant), the output layers of the model are adapted to the target domain, and the network is finetuned via backpropagation. This approach was pioneered in [7] by transferring knowledge from a generative to a discriminative model, and has since been generalized with great success [11]. Unfortunately, the approach has drawbacks which make it unsuitable for transferring across multiple tasks: if we wish to leverage knowledge acquired over a sequence of experiences, which model should we use to initialize subsequent models? This seems to require not only a learning method that can support transfer learning without catastrophic forgetting, but also foreknowledge of task similarity. Furthermore, while finetuning may allow us to recover expert performance in the target domain, it is a destructive process which discards the previously learned function. One could copy each model before finetuning to explicitly remember all previous tasks, but the issue of selecting a proper initialization remains. While distillation [8] offers one potential solution to multitask learning [17], it requires a reservoir of persistent training data for all tasks, an assumption which may not always hold.
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 2 | # Keywords Deep Matching Models, Ranking Models, Convolutional Neural Networks
# 1. INTRODUCTION
Many text based applications, such as paraphrase identification, question answering, and ad-hoc retrieval, can be formalized as a matching task [5]. Recently, a variety of deep neural models have been proposed to solve such kind of text matching tasks. However, most proposed deep matching models were only evaluated on natural language processing tasks such as paraphrase identification and question answering [10, 7]. Few deep models have been tested on the ad-hoc retrieval task. Even the models proposed for the Web search tasks, including DSSM [3] and CDSSM [9], were
only tested on the <query, doc title> pairs which are not a typical ad-hoc retrieval setting. | 1606.04648#2 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 2 | This paper introduces progressive networks, a novel model architecture with explicit support for transfer across sequences of tasks. While finetuning incorporates prior knowledge only at initialization, progressive networks retain a pool of pretrained models throughout training, and learn lateral connections from these to extract useful features for the new task. By combining previously learned features in this manner, progressive networks achieve a richer compositionality, in which prior knowledge is no longer transient and can be integrated at each layer of the feature hierarchy. Moreover, the addition of new capacity alongside pretrained networks gives these models the flexibility to both reuse old computations and learn new ones. As we will show, progressive networks naturally accumulate experiences and are immune to catastrophic forgetting by design, making them an ideal springboard for tackling long-standing problems of continual or lifelong learning.
The contributions of this paper are threefold. While many of the individual ingredients used in progressive nets can be found in the literature, their combination and use in solving complex sequences | 1606.04671#2 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 3 | In this paper, we propose to study the performance of deep matching models on the ad-hoc retrieval task. We choose a recently introduced deep matching model, namely MatchPyramid [6], which has shown state-of-the-art performance on several text matching tasks. In MatchPyramid, local interactions between two texts are first built based on some basic representations (e.g., word embeddings). The local interactions, represented by a matching matrix, are then viewed as an image, and a convolutional neural network (CNN) is employed to learn hierarchical matching patterns. Finally, the high-level matching patterns are fed into a multi-layer perceptron to produce the matching score between the two texts. The model is shown to be able to capture different levels of text matching patterns, such as n-grams and un-ordered n-terms, to improve the matching performance. When we applied the MatchPyramid model to the ad-hoc retrieval task, we conducted extensive experiments to study the impact of different kernel sizes, pooling sizes and interaction functions on the retrieval performance. We find that three key settings are helpful for the ad-hoc retrieval task, i.e. pooling by paragraph length in the document, a good similarity function which can differentiate exact matching signals from semantic matching signals, and a relatively small kernel size. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with traditional retrieval models, e.g. BM25 and language models. The recently proposed deep matching models cannot yet fit the ad-hoc retrieval task well, and more investigation needs to be conducted in this direction. | 1606.04648#3 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 3 | The contributions of this paper are threefold. While many of the individual ingredients used in progressive nets can be found in the literature, their combination and use in solving complex sequences
of tasks is novel. Second, we extensively evaluate the model in complex reinforcement learning domains. In the process, we also evaluate alternative approaches to transfer (such as finetuning) within the RL domain. In particular, we show that progressive networks provide comparable (if not slightly better) transfer performance to traditional finetuning, but without the destructive consequences. Finally, we develop a novel analysis based on Fisher Information and perturbation which allows us to analyse in detail how and where transfer occurs across tasks.
# 2 Progressive Networks
Continual learning is a long-standing goal of machine learning, where agents not only learn (and remember) a series of tasks experienced in sequence, but also have the ability to transfer knowledge from previous tasks to improve convergence speed [20]. Progressive networks integrate these desiderata directly into the model architecture: catastrophic forgetting is prevented by instantiating a new neural network (a column) for each task being solved, while transfer is enabled via lateral connections to features of previously learned columns. The scalability of this approach is addressed at the end of this section. | 1606.04671#3 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 4 | the impact of different kernel sizes, pooling sizes and interaction functions on the retrieval performance. We find that three key settings are helpful for the ad-hoc retrieval task, i.e. pooling by paragraph length in the document, a good similarity function which can differentiate exact matching signals from semantic matching signals, and a relatively small kernel size. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, e.g. BM25 and language models. The recently proposed deep matching models cannot yet fit the ad-hoc retrieval task well, and more investigation needs to be conducted in this direction. | 1606.04648#4 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 4 | A progressive network starts with a single column: a deep neural network having L layers with hidden activations h_i^{(1)} ∈ R^{n_i}, with n_i the number of units at layer i ≤ L, and parameters Θ^{(1)} trained to convergence. When switching to a second task, the parameters Θ^{(1)} are "frozen" and a new column with parameters Θ^{(2)} is instantiated (with random initialization), where layer h_i^{(2)} receives input from both h_{i-1}^{(2)} and h_{i-1}^{(1)} via lateral connections. This generalizes to K tasks as follows:
h_i^{(k)} = f( W_i^{(k)} h_{i-1}^{(k)} + Σ_{j<k} U_i^{(k:j)} h_{i-1}^{(j)} ),   (1)
where W_i^{(k)} ∈ R^{n_i × n_{i-1}} are the weight matrices of column k, U_i^{(k:j)} ∈ R^{n_i × n_j} are the lateral connections from layer i - 1 of column j to layer i of column k, and h_0 is the network input. f is an element-wise non-linearity: we use f(x) = max(0, x) for all intermediate layers. A progressive network with K = 3 is shown in Figure 1.
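To make the recurrence concrete, the following is a minimal PyTorch sketch of equation (1) for dense layers, with plain linear laterals and without the adapters introduced later; the class and variable names are ours, not the paper's:

```python
import torch
import torch.nn as nn

class Column(nn.Module):
    """One column of a progressive network (equation (1), dense layers)."""

    def __init__(self, sizes, prev_columns=()):
        super().__init__()
        self.prev_columns = list(prev_columns)  # frozen, already-trained columns
        self.W = nn.ModuleList([nn.Linear(sizes[i], sizes[i + 1])
                                for i in range(len(sizes) - 1)])
        # laterals[i][j] is U_i^{(k:j)}: maps h_{i-1} of previous column j
        # into layer i of this column.
        self.laterals = nn.ModuleList([
            nn.ModuleList([nn.Linear(sizes[i], sizes[i + 1], bias=False)
                           for _ in self.prev_columns])
            for i in range(len(sizes) - 1)])

    def hidden_states(self, x):
        """Forward pass; returns [h_1, ..., h_L] for use as lateral inputs."""
        prev = [col.hidden_states(x) for col in self.prev_columns]
        hs, h = [], x
        for i, W in enumerate(self.W):
            z = W(h)
            for j, U in enumerate(self.laterals[i]):
                lateral_in = x if i == 0 else prev[j][i - 1]  # h_{i-1}^{(j)}
                z = z + U(lateral_in)
            h = torch.relu(z)
            hs.append(h)
        return hs

# Column 1 is trained on task 1 and then frozen; column 2 learns task 2
# while reading column 1's features through its lateral connections.
col1 = Column([8, 16, 4])
for p in col1.parameters():
    p.requires_grad_(False)  # frozen: no catastrophic forgetting
col2 = Column([8, 16, 4], prev_columns=[col1])
print(col2.hidden_states(torch.randn(2, 8))[-1].shape)  # torch.Size([2, 4])
```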
Figure 1: Depiction of a three column progressive network. The first two columns on the left (dashed arrows) were trained on task 1 and 2 respectively. The grey box labelled a represents the adapter layers (see text). A third column is added for the final task, having access to all previously learned features. | 1606.04671#4 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 5 | # 2. MATCHPYRAMID
The MatchPyramid model (Figure 1) has three parts, Matching Matrix, Hierarchical Convolution and Matching Score Aggregation.
# 2.1 Matching Matrix
In order to keep both the word-level similarity and the matching structures, MatchPyramid introduces the Matching Matrix structure. Matching Matrix is a two-dimension structure where each element M_{ij} denotes the similarity between the i-th word w_i in the first piece of text and the j-th word v_j in the second piece of text.
M_{ij} = w_i ⊗ v_j,   (1) | 1606.04648#5 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 5 | These modelling decisions are informed by our desire to: (1) solve K independent tasks at the end of training; (2) accelerate learning via transfer when possible; and (3) avoid catastrophic forgetting.
In the standard pretrain-and-finetune paradigm, there is often an implicit assumption of "overlap" between the tasks. Finetuning is efficient in this setting, as parameters need only be adjusted slightly to the target domain, and often only the top layer is retrained [23]. In contrast, we make no assumptions about the relationship between tasks, which may in practice be orthogonal or even adversarial. While the finetuning stage could potentially unlearn these features, this may prove difficult. Progressive networks side-step this issue by allocating a new column for each new task, whose weights are initialized randomly. Compared to the task-relevant initialization of pretraining,
1Progressive networks can also be generalized in a straightforward manner to have arbitrary network width per column/layer, to accommodate varying degrees of task difficulty, or to compile lateral connections from multiple, independent networks in an ensemble setting. Biases are omitted for clarity.
| 1606.04671#5 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 6 |
M_{ij} = w_i ⊗ v_j,   (1)
[Figure 1 here: the matching matrix (Layer-0) feeds a 2D convolution (Layer-1) and 2D pooling (Layer-2), repeated through Layer-n, followed by an MLP that outputs the matching score.]
Figure 1: Model structure of MatchPyramid.
where ⊗ stands for a general operation to obtain the similarity.
We define four kinds of similarity functions based on words w_i and v_j, or their embeddings α_i and β_j.
Indicator Function produces either 1 or 0 to indicate whether two words are identical.
M_{ij} = I_{{w_i = v_j}} = 1 if w_i = v_j, and 0 otherwise.   (2)
# 2.4 Training
We use pairwise ranking loss in the training phase. Given a triple (q, d^+, d^-) where the matching score of (q, d^+) should be higher than that of (q, d^-), the loss function is defined as:
L(q, d^+, d^-; Θ) = max(0, 1 - S(q, d^+) + S(q, d^-)). | 1606.04648#6 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
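The pairwise ranking loss defined in the training subsection above reduces to a margin hinge over score pairs; a minimal sketch with hypothetical scores:

```python
import torch

def pairwise_hinge_loss(s_pos, s_neg, margin=1.0):
    """L(q, d+, d-) = max(0, margin - S(q, d+) + S(q, d-)),
    averaged over a batch of triples."""
    return torch.clamp(margin - s_pos + s_neg, min=0).mean()

s_pos = torch.tensor([2.0, 0.3])  # hypothetical scores S(q, d+)
s_neg = torch.tensor([1.5, 0.9])  # hypothetical scores S(q, d-)
print(pairwise_hinge_loss(s_pos, s_neg))  # tensor(1.0500)
```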
1606.04671 | 6 |
columns in progressive networks are free to reuse, modify or ignore previously learned features via the lateral connections. As the lateral connections U^{(k:j)} are only from column k to columns j < k, previous columns are not affected by the newly learned features in the forward pass. Because also the parameters {Θ^{(j)}; j < k} are kept frozen (i.e. are constants for the optimizer) when training Θ^{(k)}, there is no interference between tasks and hence no catastrophic forgetting.
Application to Reinforcement Learning. Although progressive networks are widely applicable, this paper focuses on their application to deep reinforcement learning. In this case, each column is trained to solve a particular Markov Decision Process (MDP): the k-th column thus defines a policy π^{(k)}(a | s) taking as input a state s given by the environment, and generating probabilities over actions π^{(k)}(a | s) := h_L^{(k)}(s). At each time-step, an action is sampled from this distribution and taken in the environment, yielding the subsequent state. This policy implicitly defines a stationary distribution ρ^{π^{(k)}}(s, a) over states and actions. | 1606.04671#6 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
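The per-column policy described above amounts to normalizing the final layer's outputs into action probabilities and sampling; a minimal sketch, treating h_L^{(k)}(s) as pre-softmax logits (an assumption on our part, since the text only states pi(a|s) := h_L(s)):

```python
import torch

def act(policy_logits):
    """Sample an action from column k's policy pi(a|s) (a sketch)."""
    probs = torch.softmax(policy_logits, dim=-1)
    return torch.distributions.Categorical(probs=probs).sample()

logits = torch.randn(4)  # hypothetical final-layer outputs for 4 actions
print(int(act(logits)))  # e.g. 2
```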
1606.04648 | 7 | M_{ij} = I_{{w_i = v_j}} = 1 if w_i = v_j, and 0 otherwise.   (2)
Cosine views angles between word vectors as the similarity, and it acts as a soft indicator function.
M_{ij} = α_i^T β_j / (‖α_i‖ ‖β_j‖),   (3)
where ‖·‖ stands for the ℓ2 norm of a vector.
Dot Product further considers the norm of word vectors, as compared to cosine.
M_{ij} = α_i^T β_j.   (4)
Gaussian Kernel is a well-known similarity function. This similarity function is introduced in this work based on our studies.
where S(q, d) denotes the predicted matching score for (q, d), and Θ includes the parameters for the feed forward matching network and those for the term gating network. The optimization is relatively straightforward with standard back-propagation. For regularization, we find that the early stopping [1] strategy works well for our model.
# 3. EXPERIMENTS
# 3.1 Dataset and Settings | 1606.04648#7 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 7 | Adapters. In practice, we augment the progressive network layer of Equation (1) with non-linear lateral connections which we call adapters. They serve both to improve initial conditioning and perform dimensionality reduction. Defining the vector of anterior features h_{i-1}^{(<k)} = [h_{i-1}^{(1)}, ..., h_{i-1}^{(j)}, ..., h_{i-1}^{(k-1)}] of dimensionality n_{i-1}^{(<k)}, in the case of dense layers, we replace the linear lateral connection with a single hidden layer MLP. Before feeding the lateral activations into the MLP, we multiply them by a learned scalar, initialized by a random small value. Its role is to adjust for the different scales of the different inputs. The hidden layer of the non-linear adapter is a projection onto an n_{i-1} dimensional subspace. As the index k grows, this ensures that the number of parameters stemming from the lateral connections is in the same order as |Θ^{(1)}|. Omitting bias terms, we get:
$$h_i^{(k)} = \sigma\!\left(W_i^{(k)} h_{i-1}^{(k)} + U_i^{(k:j)}\, \sigma\!\left(V_i^{(k:j)}\, \alpha_{i-1}^{(<k)}\, h_{i-1}^{(<k)}\right)\right), \quad (2)$$
where $V_i^{(k:j)} \in \mathbb{R}^{n_{i-1} \times n_{i-1}^{(<k)}}$ is the projection matrix. For convolutional layers, dimensionality reduction is performed via $1 \times 1$ convolutions [10]. | 1606.04671#7 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
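As an illustration of the adapter layer in Equation 2 above, here is a minimal PyTorch sketch of one dense progressive-network layer; the class layout, the ReLU choice for σ, and the initialization scale are assumptions made for this example, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ProgressiveDenseLayer(nn.Module):
    """Layer i of column k: within-column weights W plus an MLP adapter
    over the concatenated anterior features h_{i-1}^{(<k)} (cf. Eq. 2)."""

    def __init__(self, n_in, n_out, n_prev):
        super().__init__()
        self.W = nn.Linear(n_in, n_out)    # within-column connection
        self.V = nn.Linear(n_prev, n_in)   # adapter: project laterals onto an n_i-dim subspace
        self.U = nn.Linear(n_in, n_out)    # lateral connection applied after the adapter
        # learned scalar, initialized to a small random value, rescales the
        # anterior features to adjust for their different scales
        self.alpha = nn.Parameter(0.01 * torch.randn(1))

    def forward(self, h, h_prev):
        # h: (batch, n_in) previous-layer activations of the current column
        # h_prev: (batch, n_prev) concatenated activations from all prior columns
        lateral = self.U(torch.relu(self.V(self.alpha * h_prev)))
        return torch.relu(self.W(h) + lateral)
```

When training column k, only these parameters would receive gradients; all earlier columns stay frozen, which is what makes the architecture immune to forgetting.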
1606.04648 | 8 | $$M_{ij} = \vec{\alpha}_i^{\top} \vec{\beta}_j. \quad (4)$$
Gaussian kernel is a well-known similarity function; we introduce it for this task based on our studies in this work.
To conduct experiments, we use the TREC collection Robust04, which is a news dataset. The topics are collected from the TREC Robust Track 2004. The statistics of the data collection are shown in Table 1. In this paper, the titles of the TREC topics are treated as the queries. We use the Galago search engine in this experiment, and both queries and documents are white-space tokenized, lower-cased, and stemmed during indexing and retrieval.
$$M_{ij} = e^{-\|\vec{\alpha}_i - \vec{\beta}_j\|^2}. \quad (5)$$
We name MatchPyramid with indicator function as MP-Ind for short. Similarly, we use MP-Cos for cosine, MP-Dot for dot product and MP-Gau for Gaussian kernel.
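To make the four interaction functions concrete, the following NumPy sketch builds the Matching Matrix from pre-looked-up embeddings; the function and argument names are our own, and treating identical embedding rows as identical words in the indicator case is a simplifying assumption.

```python
import numpy as np

def matching_matrix(Q, D, kind="ind"):
    """Matching Matrix M (n_q x n_d) for query/document embeddings.

    Q: (n_q, dim) query word vectors; D: (n_d, dim) document word vectors.
    kind: 'ind' (Eq. 2), 'cos' (Eq. 3), 'dot' (Eq. 4) or 'gau' (Eq. 5).
    """
    if kind == "ind":
        # exact match: 1 iff the two words are identical (here: identical rows)
        return (Q[:, None, :] == D[None, :, :]).all(-1).astype(float)
    if kind == "dot":
        return Q @ D.T
    if kind == "cos":
        Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
        Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
        return Qn @ Dn.T
    if kind == "gau":
        # exp(-||a_i - b_j||^2): squared distances via broadcasting
        sq = ((Q[:, None, :] - D[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq)
    raise ValueError(f"unknown interaction function: {kind}")
```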
# 2.2 Hierarchical Convolution
All the models share word embeddings trained on the Wikipedia corpus with 50 dimensions. We adopt the Adam algorithm [4] for model training. The learning rate is set to $10^{-4}$. All the MatchPyramid models have one convolutional layer and one dynamic pooling layer, since more convolutional and pooling layers lead to overfitting due to the limited training data. | 1606.04648#8 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 8 | Limitations. Progressive networks are a stepping stone towards a full continual learning agent: they contain the necessary ingredients to learn multiple tasks, in sequence, while enabling transfer and being immune to catastrophic forgetting. A downside of the approach is the growth in number of parameters with the number of tasks. The analysis of Appendix 2 reveals that only a fraction of the new capacity is actually utilized, and that this trend increases with more columns. This suggests that growth can be addressed, e.g. by adding fewer layers or less capacity, by pruning [9], or by online compression [17] during learning. Furthermore, while progressive networks retain the ability to solve all K tasks at test time, choosing which column to use for inference requires knowledge of the task label. These issues are left as future work.
# 3 Transfer Analysis
Unlike finetuning, progressive nets do not destroy the features learned on prior tasks. This enables us to study in detail which features, and at which depth, transfer actually occurs. We explored two related methods: an intuitive but slow method based on a perturbation analysis, and a faster analytical method derived from the Fisher Information [2]. | 1606.04671#8 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 9 | Based on the Matching Matrix, MatchPyramid conducts hierarchical convolution to extract matching patterns. Hierarchical convolution consists of convolutional layers and dynamic pooling layers, which are commonly used in CNNs (such as AlexNet and GoogLeNet) for image recognition tasks. Kernel sizes in each convolutional layer are the major hyperparameters. In text processing, the size of the kernel determines the number of words we want to compose together; in other words, the kernel size decides the maximum size of the n-term features we take into account.
Table 1: Statistics of the TREC collection Robust04.

| #Vocab | #Doc | #Query | #Retrieval doc | Avg doc length |
|--------|------|--------|----------------|----------------|
| 0.6M | 0.5M | 250 | 2000 | 477 |
# 3.2 Detailed Studies on MatchPyramid Models for Ad-hoc Retrieval
Besides, the pooling sizes in each pooling layer are also important hyperparameters, which decide how large an area we want to take as a unit.
# 2.3 Matching Score Aggregation
After hierarchical convolution, two additional fully connected layers are used to aggregate the information into a single matching score. In this paper, we use 128 hidden nodes for the fully connected hidden layer and ReLU as the activation function.
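Putting the pieces together, a compact PyTorch sketch of the scoring pipeline could look as follows; the number of convolution channels is an assumption of ours, while the single convolutional layer, dynamic pooling, 128 hidden nodes, and ReLU follow the text.

```python
import torch
import torch.nn as nn

class MatchPyramidScorer(nn.Module):
    def __init__(self, kernel=(1, 3), pool=(3, 10), channels=8):
        super().__init__()
        # one convolutional layer over the Matching Matrix
        self.conv = nn.Conv2d(1, channels, kernel_size=kernel)
        # dynamic pooling: whatever the input size, pool down to a fixed grid
        self.pool = nn.AdaptiveMaxPool2d(pool)
        # two fully connected layers aggregate everything into one score
        self.fc1 = nn.Linear(channels * pool[0] * pool[1], 128)
        self.fc2 = nn.Linear(128, 1)

    def forward(self, M):
        # M: (batch, n_q, n_d) Matching Matrix from one of Eqs. (2)-(5)
        x = torch.relu(self.conv(M.unsqueeze(1)))   # add a channel dimension
        x = self.pool(x).flatten(1)
        return self.fc2(torch.relu(self.fc1(x)))
```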
# 3.2.1 Impact of Pooling Size | 1606.04648#9 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 9 | Average Perturbation Sensitivity (APS). To evaluate the degree to which source columns contribute to the target task, we can inject Gaussian noise at isolated points in the architecture (e.g. a given layer of a single column) and measure the impact of this perturbation on performance. A significant drop in performance indicates that the final prediction is heavily reliant on the feature map or layer. We find that this method yields similar results to the faster Fisher-based method presented below. We thus relegate details and results of the perturbation analysis to the appendix.
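A minimal sketch of that perturbation protocol in PyTorch; the forward-hook mechanism and the `run_episode` interface are our own choices for illustration.

```python
import torch

def perturbed_return(policy, layer, run_episode, sigma=1.0, episodes=10):
    """Average episode return with Gaussian noise injected at `layer`.

    `policy` is the trained network, `layer` one of its nn.Module blocks, and
    `run_episode(policy)` rolls out a single episode and returns its score
    (both assumed interfaces for this sketch).
    """
    def add_noise(module, inputs, output):
        # a forward hook that returns a value replaces the module's output
        return output + sigma * torch.randn_like(output)

    handle = layer.register_forward_hook(add_noise)
    try:
        scores = [run_episode(policy) for _ in range(episodes)]
    finally:
        handle.remove()  # restore the unperturbed network
    return sum(scores) / len(scores)

# Comparing this value against the clean return quantifies how heavily the
# final policy relies on that feature map or layer (the APS idea).
```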
Average Fisher Sensitivity (AFS). We can get a local approximation to the perturbation sensitivity by using the Fisher Information matrix [2]. While the Fisher matrix is typically computed with respect to the model parameters, we compute a modified diagonal Fisher $\hat{F}$ of the network policy $\pi$
| 1606.04671#9 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 10 | # 3.2.1 Impact of Pooling Size
We first study the effect of pooling size. Pooling layers are used to reduce the dimension of the feature maps and to pick out the most important information for the later layers. Especially in the ad-hoc retrieval task, documents often contain hundreds of words, but most of them might be background words, so the pooling layers might be even more important for distilling the useful information from the noisy background. In this experiment, we try different pooling sizes and show the results in Table 2. In our experiments, the maximum
Figure 2: For one word chosen from the vocabulary, histograms of its similarity to every other word under the three similarity functions: (a) dot product, (b) cosine, and (c) Gaussian kernel. The arrow points to the similarity between two identical words (the word we chose).
query length is 5 and we truncate documents to a length of 500 for computational efficiency. We have also tried other document lengths, such as 1000 or the full length, but the change in performance is slight.
Table 2: Comparison of different pooling sizes of MatchPyramid (kernel size fixed at 1 × 1). | 1606.04648#10 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 10 | with respect to the normalized activations at each layer, $\hat{h}_i^{(k)}$ (see footnote 2). For convolutional layers, we define $\hat{F}$ to implicitly perform a summation over pixel locations. $\hat{F}$ can be interpreted as the sensitivity of the policy to small changes in the representation. We define the diagonal matrix $\hat{F}_i^{(k)}$, having elements $\hat{F}_i^{(k)}(m, m)$, and the derived Average Fisher Sensitivity (AFS) of feature $m$ in layer $i$ of column $k$ as:
$$\hat{F}_i^{(k)}(m, m) = \mathbb{E}_{\rho(s,a)}\!\left[\frac{\partial \log \pi}{\partial \hat{h}_i^{(k)}(m)} \, \frac{\partial \log \pi}{\partial \hat{h}_i^{(k)}(m)}\right], \qquad \mathrm{AFS}(i, k, m) = \frac{\hat{F}_i^{(k)}(m, m)}{\sum_k \hat{F}_i^{(k)}(m, m)}$$
where the expectation is over the joint state-action distribution $\rho(s, a)$ induced by the progressive network trained on the target task. In practice, it is often useful to consider the per-layer AFS score $\mathrm{AFS}(i, k) = \sum_m \mathrm{AFS}(i, k, m)$, i.e. summing over all features of layer $i$. The AFS and APS thus estimate how much the network relies on each feature or column in a layer to compute its output.
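A sketch of how the per-layer AFS could be estimated from a batch of states and actions; the interfaces (normalized activations kept in the autograd graph, a scalar mean log-probability) are assumptions, and constant batch factors cancel in the normalization.

```python
import torch

def afs_per_column(mean_log_pi, h_hats):
    """AFS(i, k) for a fixed layer i across the K columns.

    mean_log_pi: scalar, mean of log pi(a|s) over a batch of (s, a) samples.
    h_hats: list of K tensors (batch, n_features) of normalized activations
            h_hat_i^{(k)}, all participating in the graph of mean_log_pi.
    """
    grads = torch.autograd.grad(mean_log_pi, h_hats, retain_graph=True)
    # diagonal Fisher per feature: E[(d log pi / d h_hat(m))^2]
    F = torch.stack([(g ** 2).mean(0) for g in grads])   # (K, n_features)
    afs = F / (F.sum(0, keepdim=True) + 1e-12)           # AFS(i, k, m)
    return afs.sum(1)                                    # AFS(i, k) = sum_m AFS(i, k, m)
```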
# 4 Related Literature | 1606.04671#10 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 11 | Table 2: Comparison of different pooling sizes of MatchPyramid (kernel size fixed at 1 × 1).
| Model | Pooling Size | MAP | nDCG@20 | P@20 |
|--------|--------------|-------|---------|-------|
| MP-Ind | 5 × 100 | 0.175 | 0.320 | 0.254 |
| MP-Ind | 5 × 50 | 0.195 | 0.343 | 0.266 |
| MP-Ind | 5 × 20 | 0.209 | 0.363 | 0.279 |
| MP-Ind | 5 × 10 | 0.219 | 0.387 | 0.301 |
| MP-Ind | 5 × 5 | 0.214 | 0.380 | 0.295 |
| MP-Ind | 5 × 3 | 0.209 | 0.370 | 0.300 |
| MP-Ind | 3 × 20 | 0.213 | 0.357 | 0.285 |
| MP-Ind | 3 × 10 | 0.225 | 0.387 | 0.302 |
| MP-Ind | 3 × 5 | 0.225 | 0.385 | 0.301 |
| MP-Ind | 3 × 3 | 0.215 | 0.377 | 0.301 |
| MP-Ind | 1 × 10 | 0.056 | 0.082 | 0.073 |
| MP-Ind | 1 × 5 | 0.051 | 0.072 | 0.078 |
| MP-Ind | 1 × 3 | 0.043 | 0.066 | 0.053 | | 1606.04648#11 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 11 | # 4 Related Literature
There exist many different paradigms for transfer and multi-task reinforcement learning, as these have long been recognized as critical challenges in AI research [15, 19, 20]. Many methods for transfer learning rely on linear and other simple models (e.g. [18]), which is a limiting factor to their applicability. Recently, new methods have been proposed for multi-task or transfer learning with deep RL: [22, 17, 14]. In this work we present an architecture for deep reinforcement learning that, in sequential task regimes, enables learning without forgetting while supporting individual feature transfer from previously learned tasks.
Pretraining and finetuning were proposed in [7] and applied to transfer learning in [4, 11], generally in unsupervised-to-supervised or supervised-to-supervised settings. The actor-mimic approach [14] applied these principles to reinforcement learning by fine-tuning a DQN multi-task network on new Atari games, showing that some responded with faster learning while others did not. Progressive networks differ substantially from the finetuning direction, since capacity is added as new tasks are learned. | 1606.04671#11 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 12 | The results show that a pooling size that is either too small or too large is not suitable for this task. The best pooling size is about 3 × 10, which means that on the query side we use the median query length, and on the document side we aggregate every 50 words, close to the average length of a paragraph.
Table 3: Comparison of different similarity functions of MatchPyramid (kernel size fixed at 1 × 1, pooling size 3 × 10).

| Model | MAP | nDCG@20 | P@20 |
|--------|-------|---------|-------|
| MP-Ind | 0.225 | 0.387 | 0.302 |
| MP-Dot | 0.095 | 0.149 | 0.142 |
| MP-Cos | 0.189 | 0.340 | 0.272 |
| MP-Gau | 0.226 | 0.403 | 0.318 |
this ability, which leads to worse results. The performance gap between MP-Cos and MP-Gau may be related to the gap between exact matching and semantic matching scores. The large gap in MP-Gau indicates that the model can better differentiate exact matching from semantic matching, and this leads to better performance.
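The Figure 2 analysis is easy to reproduce; in this sketch, E is a placeholder (V, dim) embedding matrix and w the index of the chosen word.

```python
import numpy as np

def similarity_histograms(E, w, bins=50):
    """Similarity of word w to every vocabulary word, per similarity function."""
    v = E[w]
    dot = E @ v
    cos = dot / (np.linalg.norm(E, axis=1) * np.linalg.norm(v) + 1e-12)
    gau = np.exp(-((E - v) ** 2).sum(axis=1))
    # Under cosine and the Gaussian kernel the self-similarity is the maximum
    # possible value (1.0), while under dot product other words can outscore
    # the word itself -- which is why MP-Dot washes out exact-match signals.
    return {name: np.histogram(s, bins=bins)
            for name, s in [("dot", dot), ("cos", cos), ("gau", gau)]}
```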
# 3.2.3 Impact of Kernel Size | 1606.04648#12 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 12 | Progressive nets are related to the incremental and constructive architectures proposed in the neural network literature. The cascade-correlation architecture was designed to eliminate forgetting while incrementally adding and refining feature extractors [6]. Auto-encoders such as [24] use incremental feature augmentation to track concept drift, and deep architectures such as [16] have been designed that specifically support feature transfer. More recently, in [1], columns are separately trained on individual noise types, then linearly combined, and [5] use columns for image classification. The block-modular architecture of [21] has many similarities to our approach but focuses on a visual discrimination task. The progressive net approach, in contrast, uses lateral connections to access previously learned features for deep compositionality. It can be used in any sequential learning setting but is especially valuable in RL.
# 5 Experiments
We evaluate progressive networks across three different RL domains. First, we consider synthetic versions of Pong, altered to have visual or control-level similarities. Next, we experiment broadly with random sequences of Atari games and perform a feature-level transfer analysis. Lastly, we demonstrate performance on a set of 3D maze games. Fig. 2 shows examples from selected tasks.
# 5.1 Setup | 1606.04671#12 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 13 | # 3.2.3 Impact of Kernel Size
In this section we study the effect of the kernel size of the convolutional layer in the MatchPyramid model. We conduct these experiments with MP-Ind and MP-Gau on different kernel sizes, because MP-Dot and MP-Cos do not work well on the ad-hoc retrieval task. The results are shown in Table 4.
Table 4: Comparison of different kernel sizes of MatchPyramid (pooling size fixed at 3 × 10).
# 3.2.2 Impact of Similarity Function
The similarity function is used to measure the similarity of two words and to build the Matching Matrix. For the paraphrase identification task, previous results show that the indicator function is the best. For question answering, we find that the dot product function is better than the others. Here we explore the similarity function for the ad-hoc retrieval task, comparing four kinds of similarity functions: indicator function, dot product, cosine, and Gaussian kernel. | 1606.04648#13 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 13 | # 5.1 Setup
We rely on the Asynchronous Advantage Actor-Critic (A3C) framework introduced in [13]. Compared to DQN [12], the model simultaneously learns a policy and a value function for predicting expected future rewards. A3C is trained on CPU using multiple threads and has been shown to converge faster than DQN on GPU. This made it a more natural fit for the large number of sequential experiments required for this work.
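For orientation, a compact sketch of the actor-critic loss each A3C worker optimizes; the advantage estimate, entropy bonus, and coefficients below are standard choices rather than details taken from this paper.

```python
import torch

def a3c_loss(logits, values, actions, returns, beta=0.01, value_coef=0.5):
    """Joint policy/value loss for one rollout of length T.

    logits: (T, n_actions) policy logits; values: (T,) value predictions;
    actions: (T,) long tensor of chosen actions; returns: (T,) bootstrapped returns.
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    advantage = returns - values.detach()   # critic not updated through the policy term
    chosen = log_probs[torch.arange(len(actions)), actions]
    policy_loss = -(chosen * advantage).mean()
    entropy = -(probs * log_probs).sum(-1).mean()   # encourages exploration
    value_loss = (returns - values).pow(2).mean()   # value-function regression
    return policy_loss - beta * entropy + value_coef * value_loss
```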
²The Fisher of individual neurons (fully connected) and feature maps (convolutional layers) are computed over $\rho^{\pi^{(k)}}(s, a)$. The use of a normalized representation $\hat{h}$ is non-standard, but makes the scale of $\hat{F}$ comparable across layers and columns.
[Figure 2 panel labels: (a) Pong variants, (b) Labyrinth games, (c) Atari games] | 1606.04671#13 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 14 | As we can see, MP-Ind performs quite well, indicating that exact matching signals are very useful for the ad-hoc retrieval task. Among the embedding-based variants, the Gaussian kernel similarity function is better than the others. To understand the underlying reasons, we first take a look at the word similarity distributions shown in Figure 2. We can see that the exact matching score (marked by the arrow) is the largest value under the cosine and Gaussian kernel similarity functions, but not under the dot product. So MP-Cos and MP-Gau keep the strength of exact matching signals, while MP-Dot loses
| Model | Kernel Size | MAP | nDCG@20 | P@20 |
|--------|-------------|-------|---------|-------|
| MP-Ind | 1 × 1 | 0.225 | 0.387 | 0.302 |
| MP-Ind | 1 × 3 | 0.226 | 0.382 | 0.294 |
| MP-Ind | 1 × 5 | 0.223 | 0.382 | 0.297 |
| MP-Ind | 3 × 3 | 0.221 | 0.379 | 0.295 |
| MP-Ind | 5 × 5 | 0.219 | 0.378 | 0.295 |
| MP-Gau | 1 × 1 | 0.226 | 0.403 | 0.318 |
| MP-Gau | 1 × 3 | 0.232 | 0.411 | 0.327 |
| MP-Gau | 1 × 5 | 0.226 | 0.409 | 0.326 |
| MP-Gau | 3 × 3 | 0.220 | 0.400 | 0.312 |
| MP-Gau | 5 × 5 | 0.201 | 0.371 | 0.301 | | 1606.04648#14 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |
1606.04671 | 14 | Figure 2: Samples from different task domains: (a) Pong variants include flipped, noisy, scaled, and recoloured transforms; (b) Labyrinth is a set of 3D maze games with diverse level maps and diverse positive and negative reward items; (c) Atari games offer a more challenging setting for transfer.
We report results by averaging the top 3 out of 25 jobs, each with different seeds and random hyper-parameter sampling. Performance is evaluated by measuring the area under the learning curve (average score per episode during training), rather than the final score. The transfer score is then defined as the relative performance of an architecture compared with a single-column baseline trained only on the target task (baseline 1). We present transfer score curves for selected source-target games, and summarize all such pairs in transfer matrices. The models and baselines we consider are illustrated in Figure 3. Details of the experimental setup are provided in section 3 of the Appendix.
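A minimal sketch of this evaluation protocol, assuming each learning curve is an array of average per-episode scores recorded during training:

```python
import numpy as np

def auc(curve):
    """Area under a learning curve of average score per episode."""
    return float(np.trapz(curve))

def transfer_score(target_curves, baseline_curves, top=3):
    """Relative performance vs. the single-column baseline (baseline 1),
    averaging the AUC of the top `top` jobs for each architecture."""
    best = lambda cs: np.mean(sorted(auc(c) for c in cs)[-top:])
    return best(target_curves) / best(baseline_curves)
```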
[Figure 3: Illustration of the compared models — (1) Baseline 1, (2) Baseline 2, (3) Baseline 3, (4) Baseline 4, (5) Progressive Net with 2 columns, (6) Progressive Net with 3 columns — showing source task, target task, random input, and frozen columns.] | 1606.04671#14 | Progressive Neural Networks | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy. | http://arxiv.org/pdf/1606.04671 | Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | cs.LG | null | null | cs.LG | 20160615 | 20221022 | [] |
1606.04648 | 15 | We have tried two kinds of kernel sizes, 1 × n and n × n, where n ∈ {1, 3, 5}. We expect the 1 × n kernels to capture the information around the central word in the kernel window. The results show that different kernel sizes under the indicator similarity function perform similarly. This is reasonable if we look at the Matching Matrix generated by the indicator function: the Matching Matrix is very sparse, and in
a kernel window there is usually one non-zero element, so the kernel size is not that important for the indicator function. However, with the Gaussian kernel we introduce semantic word similarity into the model, and a proper kernel size gathers more information and generates a better result. We find that MP-Gau with kernel size 1 × 3 achieves the best performance. Additionally, we expected the n × n kernels to capture word proximity information, such as n-gram matching; surprisingly, there is no improvement in these experiments. The reason might be that the dataset is too small to learn the complex proximity patterns.
# 3.3 Comparison with Baseline Models
We further compare the MatchPyramid model with a set of baseline models. We adopt three types of baselines, including traditional models, representation-based deep matching models, and interaction-based deep matching models. | 1606.04648#15 | A Study of MatchPyramid Models on Ad-hoc Retrieval | Deep neural networks have been successfully applied to many text matching
tasks, such as paraphrase identification, question answering, and machine
translation. Although ad-hoc retrieval can also be formalized as a text
matching task, few deep models have been tested on it. In this paper, we study
a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc
retrieval task. The MatchPyramid model employs a convolutional neural network
over the interactions between query and document to produce the matching score.
We conducted extensive experiments to study the impact of different pooling
sizes, interaction functions and kernel sizes on the retrieval performance.
Finally, we show that the MatchPyramid models can significantly outperform
several recently introduced deep matching models on the retrieval task, but
still cannot compete with the traditional retrieval models, such as BM25 and
language models. | http://arxiv.org/pdf/1606.04648 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng | cs.IR | Neu-IR '16 SIGIR Workshop on Neural Information Retrieval | null | cs.IR | 20160615 | 20160615 | [] |