doi (string, length 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, length 31) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, length 8) | updated (string, length 8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1508.05326 | 30 | embeddings into the lower-dimensional phrase and sentence embedding space. All of the models are randomly initialized using standard techniques and trained using AdaDelta (Zeiler, 2012) minibatch SGD until performance on the development set stops improving. We applied L2 regularization to all models, manually tuning the strength coefficient λ for each, and additionally applied dropout (Srivastava et al., 2014) to the inputs and outputs of the sentence embedding models. [Table 6 fragment] Sentence model (Train / Test): 100d Sum of words 79.3 / 75.3; 100d RNN 73.1 / 72.2; 100d LSTM RNN 84.8 / 77.6 | 1508.05326#30 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
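The training setup described in the chunk above (randomly initialized 100d sentence encoders, AdaDelta minibatch SGD, per-model L2 strength λ, and dropout on the encoder's inputs and outputs) can be sketched roughly as follows. This is a hedged illustration, not the authors' code; vocabulary size, dropout rate, and the weight-decay value are assumptions.

```python
# Minimal sketch of an SNLI-style LSTM sentence-pair classifier and its optimizer.
import torch
import torch.nn as nn

class SentencePairLSTM(nn.Module):
    def __init__(self, vocab_size=20000, dim=100, num_classes=3, p_drop=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.drop = nn.Dropout(p_drop)            # dropout on encoder inputs/outputs
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, 200), nn.Tanh(),   # layers over the concatenated sentence vectors
            nn.Linear(200, num_classes),
        )

    def encode(self, tokens):                     # tokens: LongTensor [batch, seq]
        x = self.drop(self.embed(tokens))
        _, (h, _) = self.encoder(x)               # final hidden state as the sentence vector
        return self.drop(h[-1])

    def forward(self, premise, hypothesis):
        pair = torch.cat([self.encode(premise), self.encode(hypothesis)], dim=-1)
        return self.classifier(pair)

model = SentencePairLSTM()
# AdaDelta minibatch SGD; L2 regularization enters through weight_decay (the strength lambda).
optimizer = torch.optim.Adadelta(model.parameters(), weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()
# Training would loop over minibatches until development-set accuracy stops improving.
```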
1508.05326 | 32 | The results are shown in Table 6. The sum of words model performed slightly worse than the fundamentally similar lexicalized classifier: while the sum of words model can use pretrained word embeddings to better handle rare words, it lacks even the rudimentary sensitivity to word order that the lexicalized model's bigram features provide. Of the two RNN models, the LSTM's more robust ability to learn long-term dependencies serves it well, giving it a substantial advantage over the plain RNN, and resulting in performance that is essentially equivalent to the lexicalized classifier on the test set (LSTM performance near the stopping iteration varies by up to 0.5% between evaluation steps). While the lexicalized model fits the training set almost perfectly, the gap between train and test set accuracy is relatively small for all three neural network models, suggesting that research into significantly higher capacity versions of these models would be productive.
# 3.4 Analysis and discussion | 1508.05326#32 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 33 | # 3.4 Analysis and discussion
Figure 4 shows a learning curve for the LSTM and the lexicalized and unlexicalized feature-based models. It shows that the large size of the corpus is crucial to both the LSTM and the lexicalized model, and suggests that additional data would yield still better performance for both. In addition, though the LSTM and the lexicalized model show similar performance when trained on the current full corpus, the somewhat steeper slope for the LSTM hints that its ability to learn arbitrarily structured representations of sentence meaning may give it an advantage over the more constrained lexicalized model on still larger datasets. We were struck by the speed with which the lexicalized classifier outperforms its unlexicalized
[Figure 4 plot: % Accuracy (y-axis, starting near 30%) versus Training pairs used on a log scale from 1 to 1,000,000 (x-axis), with curves for the Unlexicalized, Lexicalized, and LSTM models] | 1508.05326#33 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 34 | Figure 4: A learning curve showing how the baseline classifiers and the LSTM perform when trained to convergence on varied amounts of training data. The y-axis starts near a random-chance accuracy of 33%. The minibatch size of 64 that we used to tune the LSTM sets a lower bound on data for that model.
counterpart. With only 100 training examples, the cross-bigram classifier is already performing better. Empirically, we find that the top weighted features for the classifier trained on 100 examples tend to be high precision entailments; e.g., playing → outside (most scenes are outdoors), a banana → person eating. If relatively few spurious entailments get high weight (as it appears is the case), then it makes sense that, when these do fire, they boost accuracy in identifying entailments. | 1508.05326#34 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
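The observation above about high-precision lexical entailment features can be reproduced in miniature. The hedged sketch below uses only cross-unigram features and made-up sentence pairs; the paper's classifier also uses unigram, bigram, and cross-bigram features, and the data here is purely illustrative.

```python
# Toy sketch: train a linear classifier on cross-unigram features and inspect top weights.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def cross_unigrams(premise, hypothesis):
    # One indicator feature per (premise word, hypothesis word) pair.
    return {f"{p}__{h}": 1.0 for p in premise.lower().split()
                              for h in hypothesis.lower().split()}

pairs = [("children playing in a park", "kids are outside", "entailment"),
         ("a man eats a banana", "a person is eating", "entailment"),
         ("a man eats a banana", "a man plays guitar", "contradiction")]

vec = DictVectorizer()
X = vec.fit_transform(cross_unigrams(p, h) for p, h, _ in pairs)
y = [label for _, _, label in pairs]

clf = LogisticRegression(max_iter=1000).fit(X, y)
# With two classes, coef_[0] holds weights pointing toward clf.classes_[1] ("entailment").
weights = clf.coef_[0]
top = sorted(zip(weights, vec.get_feature_names_out()), reverse=True)[:5]
print(top)   # highest-weight (premise word)__(hypothesis word) pairs
```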
1508.05326 | 35 | There are revealing patterns in the errors common to all the models considered here. Despite the large size of the training corpus and the distributional information captured by GloVe initialization, many lexical relationships are still misanalyzed, leading to incorrect predictions of independent, even for pairs that are common in the training corpus like beach/surf and sprinter/runner. Semantic mistakes at the phrasal level (e.g., predicting contradiction for A male is placing an order in a deli/A man buying a sandwich at a deli) indicate that additional attention to compositional semantics would pay off. However, many of the persistent problems run deeper, to inferences that depend on world knowledge and context-specific inferences, as in the entailment pair A race car driver leaps from a burning car/A race car driver escaping danger, for which both the lexicalized classifier and the LSTM predict neutral. In other cases, the models' attempts to shortcut | 1508.05326#35 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 37 | Analysis of the models' predictions also yields insights into the extent to which they grapple with event and entity coreference. For the most part, the original image prompts contained a focal element that the caption writer identified with a syntactic subject, following information structuring conventions associating subjects and topics in English (Ward and Birner, 2004). Our annotators generally followed suit, writing sentences that, while structurally diverse, share topic/focus (theme/rheme) structure with their premises. This promotes a coherent, situation-specific construal of each sentence pair. This is information that our models can easily take advantage of, but it can lead them astray. For instance, all of them stumble with the amusingly simple case A woman prepares ingredients for a bowl of soup/A soup bowl prepares a woman, in which prior expectations about parallelism are not met. Another headline example of this type is A man wearing padded arm protection is being bitten by a German shepherd dog/A man bit a dog, which all the models wrongly diagnose as entailment, though the sentences report two very different stories. A model with access to explicit information about syntactic or semantic structure should perform better on cases like these.
# 4 Transfer learning with SICK | 1508.05326#37 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 38 | # 4 Transfer learning with SICK
To the extent that successfully training a neural network model like our LSTM on SNLI forces that model to encode broadly accurate representations of English scene descriptions and to build an entailment classifier over those relations, we should expect it to be readily possible to adapt the trained model for use on other NLI tasks. In this section, we evaluate on the SICK entailment task using a simple transfer learning method (Pratt et al., 1991) and achieve competitive results.
To perform transfer, we take the parameters of the LSTM RNN model trained on SNLI and use them to initialize a new model, which is trained from that point only on the training portion of SICK. The only newly initialized parameters are
Training sets (Train / Test): Our data only 42.0 / 46.7; SICK only 100.0 / 71.3; Our data and SICK (transfer) 99.9 / 80.8
Table 7: LSTM 3-class accuracy on the SICK train and test sets under three training regimes. | 1508.05326#38 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 39 | Table 7: LSTM 3-class accuracy on the SICK train and test sets under three training regimes.
softmax layer parameters and the embeddings for words that appear in SICK, but not in SNLI (which are populated with GloVe embeddings as above). We use the same model hyperparameters that were used to train the original model, with the exception of the L2 regularization strength, which is re-tuned. We additionally transfer the accumulators that are used by AdaDelta to set the learning rates. This lowers the starting learning rates, and is intended to ensure that the model does not learn too quickly in its first few epochs after transfer and destroy the knowledge accumulated in the pre-transfer phase of training. | 1508.05326#39 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
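The transfer recipe described above (reuse the SNLI-trained LSTM parameters, newly initialize only the softmax layer and the embeddings of SICK-only words from GloVe, re-tune the L2 strength, and carry over the AdaDelta accumulators so early updates stay small) can be sketched as follows. This is a hedged illustration; the checkpoint paths, helper names, and the weight-decay value are assumptions, and it reuses the SentencePairLSTM class from the earlier sketch.

```python
# Minimal sketch of SNLI -> SICK transfer, assuming saved model and optimizer states.
import torch

model = SentencePairLSTM()                                   # class from the sketch above
model.load_state_dict(torch.load("snli_lstm.pt"))            # hypothetical SNLI checkpoint

# Newly initialized 3-way softmax layer for SICK (entailment / contradiction / neutral).
model.classifier[2] = torch.nn.Linear(200, 3)

# Words in SICK but not in SNLI get embedding rows filled from GloVe (lookup assumed given).
def init_new_words(embedding, new_word_ids, glove_rows):
    with torch.no_grad():
        for idx, row in zip(new_word_ids, glove_rows):
            embedding.weight[idx] = row

# Re-tuned L2 strength; reloading the saved AdaDelta state transfers its accumulators,
# which lowers the effective starting learning rates after transfer.
optimizer = torch.optim.Adadelta(model.parameters(), weight_decay=3e-5)
optimizer.load_state_dict(torch.load("snli_adadelta_state.pt"))
# Continue training on the SICK training portion only.
```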
1508.05326 | 40 | The results are shown in Table 7. Training on SICK alone yields poor performance, and the model trained on SNLI fails when tested on SICK data, labeling more neutral examples as contradictions than correctly, possibly as a result of subtle differences in how the labeling task was presented. In contrast, transferring SNLI representations to SICK yields the best performance yet reported for an unaugmented neural network model, surpasses the available EOP models, and approaches both the overall state of the art at 84.6% (Lai and Hockenmaier, 2014) and the 84% level of interannotator agreement, which likely represents an approximate performance ceiling. This suggests that the introduction of a large high-quality corpus makes it possible to train representation-learning models for sentence meaning that are competitive with the best hand-engineered models on inference tasks.
We attempted to apply this same transfer evaluation technique to the RTE-3 challenge, but found that the small training set (800 examples) did not allow the model to adapt to the unfamiliar genre of text used in that corpus, such that no training configuration yielded competitive performance. Further research on effective transfer learning on small data sets with neural models might facilitate improvements here.
# 5 Conclusion | 1508.05326#40 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 41 | # 5 Conclusion
Natural languages are powerful vehicles for reasoning, and nearly all questions about meaningfulness in language can be reduced to questions of entailment and contradiction in context. This suggests that NLI is an ideal testing ground for theories of semantic representation, and that training for NLI tasks can provide rich domain-general semantic representations. To date, however, it has not been possible to fully realize this potential due to the limited nature of existing NLI resources. This paper sought to remedy this with a new, large-scale, naturalistic corpus of sentence pairs labeled for entailment, contradiction, and independence. We used this corpus to evaluate a range of models, and found that both simple lexicalized models and neural network models perform well, and that the representations learned by a neural network model on our corpus can be used to dramatically improve performance on a standard challenge dataset. We hope that SNLI presents valuable training data and a challenging testbed for the continued application of machine learning to semantic representation.
# Acknowledgments | 1508.05326#41 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 42 | # Acknowledgments
We gratefully acknowledge support from a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Google, Bloomberg L.P., DARPA, AFRL, NSF, ONR, or the US government. We also thank our many excellent Mechanical Turk contributors.
# References
Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proc. EMNLP.
Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. 2015. Recursive neural networks can learn logical semantics. In Proc. of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. | 1508.05326#42 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 43 | Timothy Chklovski and Patrick Pantel. 2004. VerbOcean: Mining the web for fine-grained semantic verb relations. In Proc. EMNLP.
Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Reinhard Stolle, and Daniel G. Bobrow. 2003. Entailment, intensionality and text understanding. In Proc. of the HLT-NAACL 2003 Workshop on Text Meaning.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. Evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177–190. Springer.
Marie-Catherine de Marneffe, Anna N. Rafferty, and Christopher D. Manning. 2008. Finding contradictions in text. In Proc. ACL.
W. Nelson Francis and Henry Kucera. 1979. Brown corpus manual. Brown University.
Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. 2000. A natural logic inference system. In Proc. of the 2nd Workshop on Inference in Computational Semantics. | 1508.05326#43 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 44 | Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proc. of the ACL-PASCAL workshop on textual entailment and paraphrasing.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Jerrold J. Katz. 1972. Semantic Theory. Harper & Row, New York.
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proc. ACL.
Alice Lai and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to semantics. In Proc. SemEval.
Hector J. Levesque. 2013. On our best behaviour. In Proc. AAAI.
Omer Levy, Ido Dagan, and Jacob Goldberger. 2014. Focused entailment graphs for open IE propositions. In Proc. CoNLL.
Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proc. of the Eighth International Conference on Computational Semantics. | 1508.05326#44 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 45 | Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proc. of the Eighth International Conference on Computational Semantics.
Bernardo Magnini, Roberto Zanoli, Ido Dagan, Kathrin Eichler, Günter Neumann, Tae-Gil Noh, Sebastian Pado, Asher Stern, and Omer Levy. 2014. The Excitement Open Platform for textual inferences. In Proc. ACL.
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014a. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proc. SemEval.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014b. A SICK cure for the evaluation of compositional distributional semantic models. In Proc. LREC.
George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41. | 1508.05326#45 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 46 | George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41.
Sebastian Padó, Tae-Gil Noh, Asher Stern, Rui Wang, and Roberto Zanoli. 2014. Design and realization of a modular architecture for textual entailment. Journal of Natural Language Engineering.
Ellie Pavlick, Johan Bos, Malvina Nissim, Charley Beller, Ben Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proc. ACL.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP.
Lorien Y. Pratt, Jack Mostow, Candace A. Kamm, and Ace A. Kamm. 1991. Direct transfer of learned information among neural networks. In Proc. AAAI.
Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP. | 1508.05326#46 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 47 | Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.
Johan van Benthem. 2008. A brief history of natural logic. In M. Chakraborty, B. Löwe, M. Nath Mitra, and S. Sarukki, editors, Logic, Navya-Nyaya and Applications: Homage to Bimal Matilal. College Publications.
Sida Wang and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proc. ACL.
Rui Wang and Günter Neumann. 2007. Recognizing textual entailment using sentence similarity based on dependency tree skeletons. In ACL-PASCAL Workshop on Textual Entailment and Paraphrasing.
Gregory Ward and Betty Birner. 2004. Information structure and non-canonical syntax. In Laurence R. Horn and Gregory Ward, editors, Handbook of Pragmatics, pages 153–174. Blackwell, Oxford.
Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015a. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv:1502.05698. | 1508.05326#47 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.05326 | 48 | Jason Weston, Sumit Chopra, and Antoine Bordes. 2015b. Memory networks. In Proc. ICLR.
Terry Winograd. 1972. Understanding natural language. Cognitive Psychology, 3(1):1–191.
Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67–78.
Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. | 1508.05326#48 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 | [
{
"id": "1502.05698"
}
] |
1508.00305 | 0 | arXiv:1508.00305v1 [cs.CL] 3 Aug 2015
# Compositional Semantic Parsing on Semi-Structured Tables
# Panupong Pasupat Computer Science Department Stanford University [email protected]
# Percy Liang Computer Science Department Stanford University [email protected]
# Abstract
Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality. While existing work trades off one aspect for another, this paper simultaneously makes progress on both fronts through a new task: answering complex questions on semi-structured tables using question-answer pairs as supervision. The central challenge arises from two compounding factors: the broader domain results in an open-ended set of relations, and the deeper compositionality results in a combinatorial explosion in the space of logical forms. We propose a logical-form driven parsing algorithm guided by strong typing constraints and show that it obtains significant improvements over natural baselines. For evaluation, we created a new dataset of 22,033 complex questions on Wikipedia tables, which is made publicly available. | 1508.00305#0 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 1 | Year / City / Country / Nations: 1896 Athens Greece 14; 1900 Paris France 24; 1904 St. Louis USA 12; ...; 2004 Athens Greece 201; 2008 Beijing China 204; 2012 London UK 204
x1: "Greece held its last Summer Olympics in which year?" y1: {2004} x2: "In which city's the first time with at least 20 nations?" y2: {Paris} x3: "Which years have the most participating countries?" y3: {2008, 2012} x4: "How many events were in Athens, Greece?" y4: {2} x5: "How many more participants were there in 1900 than
in the first year?"
y5: {10}
Figure 1: Our task is to answer a highly compositional question from an HTML table. We learn a semantic parser from question-table-answer triples {(x_i, t_i, y_i)}.
rigid schema over entities and relation types, thus restricting the scope of answerable questions.
# Introduction | 1508.00305#1 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 2 | rigid schema over entities and relation types, thus restricting the scope of answerable questions.
# Introduction
In semantic parsing for question answering, natural language questions are converted into logical forms, which can be executed on a knowledge source to obtain answer denotations. Early semantic parsing systems were trained to answer highly compositional questions, but the knowledge sources were limited to small closed-domain databases (Zelle and Mooney, 1996; Wong and Mooney, 2007; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011). More recent work sacrifices compositionality in favor of using more open-ended knowledge bases such as Freebase (Cai and Yates, 2013; Berant et al., 2013; Fader et al., 2014; Reddy et al., 2014). However, even these broader knowledge sources still define a
To simultaneously increase both the breadth of the knowledge source and the depth of logical compositionality, we propose a new task (with an associated dataset): answering a question using an HTML table as the knowledge source. Figure 1 shows several question-answer pairs and an accompanying table, which are typical of those in our dataset. Note that the questions are logically quite complex, involving a variety of operations such as comparison (x2), superlatives (x3), aggregation (x4), and arithmetic (x5). | 1508.00305#2 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 3 | The HTML tables are semi-structured and not normalized. For example, a cell might contain multiple parts (e.g., "Beijing, China" or "200 km"). Additionally, we mandate that the training and test tables are disjoint, so at test time, we will see relations (column headers; e.g., "Nations") and entities (table cells; e.g., "St. Louis")
that were not observed during training. This is in contrast to knowledge bases like Freebase, which have a global fixed relation schema with normalized entities and relations.
Our task setting produces two main challenges. Firstly, the increased breadth in the knowledge source requires us to generate logical forms from novel tables with previously unseen relations and entities. We therefore cannot follow the typical semantic parsing strategy of constructing or learning a lexicon that maps phrases to relations ahead of time. Secondly, the increased depth in compositionality and additional logical operations exacerbate the exponential growth of the number of possible logical forms. | 1508.00305#3 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 4 | We trained a semantic parser for this task from question-answer pairs based on the framework illustrated in Figure 2. First, relations and entities from the semi-structured HTML table are encoded in a graph. Then, the system parses the question into candidate logical forms with a high-coverage grammar, reranks the candidates with a log-linear model, and then executes the highest-scoring logical form to produce the answer denotation. We use beam search with pruning strategies based on type and denotation constraints to control the combinatorial explosion.
To evaluate the system, we created a new dataset, WIKITABLEQUESTIONS, consisting of 2,108 HTML tables from Wikipedia and 22,033 question-answer pairs. When tested on unseen tables, the system achieves an accuracy of 37.1%, which is significantly higher than the information retrieval baseline of 12.7% and a simple semantic parsing baseline of 24.3%.
# 2 Task | 1508.00305#4 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 5 | # 2 Task
Our task is as follows: given a table t and a question x about the table, output a list of values y that answers the question according to the table. Example inputs and outputs are shown in Figure 1. The system has access to a training set D = {(x_i, t_i, y_i)}_{i=1}^{N} of questions, tables, and answers, but the tables in test data do not appear during training.
The only restriction on the question x is that a person must be able to answer it using just the table t. Other than that, the question can be of any type, ranging from a simple table lookup question to a more complicated one that involves various logical operations.
[Figure 2 diagram: the example question about Greece's last Summer Olympics passes through (1) Conversion, (2) Parsing, (3) Ranking, and (4) Execution of the chosen logical form, yielding the answer {2004}] | 1508.00305#5 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 6 | Figure 2: The prediction framework: (1) the table t is deterministically converted into a knowledge graph w as shown in Figure 3; (2) with information from w, the question x is parsed into candidate logical forms in Z_x; (3) the highest-scoring candidate z ∈ Z_x is chosen; and (4) z is executed on w, yielding the answer y.
Dataset. We created a new dataset, WIKITABLEQUESTIONS, of question-answer pairs on HTML tables as follows. We randomly selected data tables from Wikipedia with at least 8 rows and 5 columns. We then created two Amazon Mechanical Turk tasks. The first task asks workers to write trivia questions about the table. For each question, we put one of the 36 generic prompts such as "The question should require calculation" or "contains the word 'first' or its synonym" to encourage more complex utterances. Next, we submit the resulting questions to the second task where the workers answer each question based on the given table. We only keep the answers that are agreed upon by at least two workers. After this filtering, approximately 69% of the questions remain. | 1508.00305#6 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
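The answer-agreement filter described in the data-collection step above can be sketched as a small helper. This is a hedged illustration with made-up inputs; the actual collection pipeline and answer normalization are not specified in the excerpt.

```python
# Keep a crowdsourced question only when at least two workers gave the same answer.
from collections import Counter

def agreed_answer(worker_answers, min_agreement=2):
    """Return the majority (normalized) answer if >= min_agreement workers agree, else None."""
    counts = Counter(a.strip().lower() for a in worker_answers)
    answer, count = counts.most_common(1)[0]
    return answer if count >= min_agreement else None

print(agreed_answer(["Paris", "paris", "Athens"]))   # -> "paris"
print(agreed_answer(["Paris", "Athens", "London"]))  # -> None (question discarded)
```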
1508.00305 | 7 | The final dataset contains 22,033 examples on 2,108 tables. We set aside 20% of the tables and their associated questions as the test set and develop on the remaining examples. Simple preprocessing was done on the tables: We omit all non-textual contents of the tables, and if there is a merged cell spanning many rows or columns, we unmerge it and duplicate its content into each unmerged cell. Section 7.2 analyzes various aspects of the dataset and compares it to other datasets.
# 3 Approach
We now describe our semantic parsing framework for answering a given question and for training the model with question-answer pairs.
Prediction. Given a table ¢ and a question x, we predict an answer y using the framework il- lustrated in Figure We first convert the table t into a knowledge graph w (âworldâ) which en- codes different relations in the table (Section 4). Next, we generate a set of candidate logical forms Z,, by parsing the question x using the informa- tion from w (Section . Each generated logical form z ⬠Zz, is a graph query that can be exe- cuted on the knowledge graph w to get a denota- tion [z]]w. We extract a feature vector $(2, w, z) for each z ⬠2, (Section [6.2) and define a log- linear distribution over the candidates: | 1508.00305#7 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 8 | pθ(z | x, w) ∝ exp{θ⊤ φ(x, w, z)},   (1)
where θ is the parameter vector. Finally, we choose the logical form z with the highest model probability and execute it on w to get the answer denotation y = [z]w.
Training. Given training examples D = {(xi, ti, yi)} for i = 1, ..., N, we seek a parameter vector θ that maximizes the regularized log-likelihood of the correct denotation yi marginalized over logical forms z. Formally, we maximize the objective function
J(θ) = (1/N) Σ_{i=1}^{N} log pθ(yi | xi, wi) − λ ‖θ‖1,   (2)
where wi is deterministically generated from ti, and
$$p_\theta(y \mid x, w) = \sum_{z \in Z_x;\; y = \llbracket z \rrbracket_w} p_\theta(z \mid x, w). \quad (3)$$

We optimize θ using AdaGrad (Duchi et al., 2010), running 3 passes over the data. We use L1 regularization with λ = 3 × 10⁻⁵ obtained from cross-validation.
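To make the log-linear model and the marginal likelihood concrete, here is a minimal sketch (not the authors' implementation); candidate generation and feature extraction are assumed to be given, and the feature matrix and gold-denotation mask are hypothetical inputs.

```python
import numpy as np

def candidate_distribution(features, theta):
    """Log-linear distribution p_theta(z | x, w) over candidate logical forms.
    `features` is a (num_candidates x num_features) matrix of phi(x, w, z)."""
    scores = features @ theta
    scores -= scores.max()          # subtract max for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

def marginal_log_likelihood(features, executes_to_gold, theta):
    """log p_theta(y | x, w): sum the probability of candidates whose
    denotation equals the gold answer y (a boolean mask over candidates)."""
    probs = candidate_distribution(features, theta)
    return np.log(probs[executes_to_gold].sum() + 1e-12)
```

In training, the gradient of this marginal log-likelihood (plus the L1 penalty) would be followed with AdaGrad-style per-coordinate updates.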
The following sections explain individual system components in more detail.
# 4 Knowledge graph
Inspired by the graph representation of knowledge bases, we preprocess the table t by deterministically converting it into a knowledge graph w as illustrated in Figure 3. In the most basic form, table rows become row nodes, strings in table cells become entity nodes,¹ and table columns become directed edges from the row nodes to the entity nodes of that column. The column headers are used as edge labels for these row-entity relations.
¹ Two occurrences of the same string constitute one node.
Figure 3: Part of the knowledge graph corresponding to the table in Figure 1. Circular nodes are row nodes. We augment the graph with different entity normalization nodes such as Number and Date (red) and additional row node relations Next and Index (blue).
The knowledge graph representation is convenient for three reasons. First, we can encode different forms of entity normalization in the graph. Some entity strings (e.g., "1900") can be interpreted as a number, a date, or a proper name depending on the context, while some other strings (e.g., "200 km") have multiple parts. Instead of committing to one normalization scheme, we introduce edges corresponding to different normalization methods from the entity nodes. For example, the node 1900 will have an edge called Date to another node 1900-XX-XX of type date. Apart from type checking, these normalization nodes also aid learning by providing signals on the appropriate answer type. For instance, we can define a feature that associates the phrase "how many" with a logical form that says "traverse a row-entity edge, then a Number edge" instead of just "traverse a row-entity edge."
The second benefit of the graph representation is its ability to handle various logical phenomena via graph augmentation. For example, to answer questions of the form "What is the next . . . ?" or "Who came before . . . ?", we augment each row node with an edge labeled Next pointing to the next row node, after which the questions can be answered by traversing the Next edge. In this work, we choose to add two special edges on each row node: the Next edge mentioned above and an Index edge pointing to the row index number (0, 1, 2, . . . ).
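The following is a minimal sketch of this table-to-graph conversion (an illustrative implementation, not the authors' code); the edge representation and the example values are assumptions, and normalization edges are omitted for brevity.

```python
from collections import defaultdict

def build_knowledge_graph(headers, rows):
    """Convert a table into a graph: row nodes, entity nodes, column-labeled
    row-entity edges, plus Index and Next edges on row nodes."""
    edges = defaultdict(list)                      # (source node, relation) -> targets
    for i, row in enumerate(rows):
        row_node = ("row", i)
        edges[(row_node, "Index")].append(i)       # Index edge to the row number
        if i + 1 < len(rows):
            edges[(row_node, "Next")].append(("row", i + 1))  # Next edge to next row
        for header, cell in zip(headers, row):
            edges[(row_node, header)].append(cell)  # row-entity edge labeled by header
            # a fuller system would also add Number/Date normalization edges from `cell`
    return edges

# Toy example in the spirit of the table from Figure 1 (values abridged).
graph = build_knowledge_graph(
    ["Year", "City", "Country"],
    [["1896", "Athens", "Greece"], ["1900", "Paris", "France"]])
```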
Finally, with a graph representation, we can query it directly using a logical formalism for knowledge graphs, which we turn to next.
Name | Example
Join | City.Athens (row nodes with a City edge to Athens)
Union | City.(Athens ⊔ Beijing)
Intersection | City.Athens ⊓ Year.Number.<.1990
Reverse | R[Year].City.Athens (entities that a row in City.Athens has a Year edge to)
Aggregation | count(City.Athens) (the number of rows with city Athens)
Superlative | argmax(City.Athens, Index) (the last row with city Athens)
Arithmetic | sub(204, 201) (= 204 − 201)
Lambda | λx[Year.Date.x] (a binary: composition of two relations)
Table 1: The lambda DCS operations we use.
# 5 Logical forms
As our logical forms, we use lambda dependency-based compositional semantics (Liang, 2013), or lambda DCS, which we briefly describe here. Each lambda DCS logical form is either a unary (denoting a list of values) or a binary (denoting a list of pairs). The most basic unaries are singletons (e.g., China represents an entity node, and 30 represents a single number), while the most basic binaries are relations (e.g., City maps rows to city entities, Next maps rows to rows, and >= maps numbers to numbers). Logical forms can be combined into larger ones via various operations listed in Table 1. Each operation produces a unary except lambda abstraction: λx[f(x)] is a binary mapping x to f(x).
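To illustrate how a few of the operations in Table 1 can be executed against the knowledge graph, here is a toy evaluator (a sketch under simplified assumptions; the graph encoding and helper names are hypothetical, not the paper's implementation).

```python
# Toy graph: (node, relation) -> list of targets, as in the earlier sketch.
graph = {
    (("row", 0), "Country"): ["Greece"], (("row", 0), "Year"): ["1896"], (("row", 0), "Index"): [0],
    (("row", 1), "Country"): ["France"], (("row", 1), "Year"): ["1900"], (("row", 1), "Index"): [1],
}

def join(graph, relation, values):
    """relation.values: nodes with a `relation` edge into `values` (e.g., Country.Greece)."""
    values = set(values)
    return [src for (src, rel), targets in graph.items()
            if rel == relation and any(t in values for t in targets)]

def reverse_join(graph, relation, sources):
    """R[relation].sources: targets of `relation` edges leaving `sources`."""
    sources = set(sources)
    return [t for (src, rel), targets in graph.items()
            if rel == relation and src in sources for t in targets]

def count(unary):
    return [len(unary)]

def argmax(graph, unary, relation):
    """Element of `unary` whose `relation` value is largest (e.g., argmax(..., Index))."""
    return [max(unary, key=lambda u: max(reverse_join(graph, relation, [u])))]

# "Greece held its last Summer Olympics in which year?"  ~  R[Year].argmax(Country.Greece, Index)
rows = join(graph, "Country", ["Greece"])
answer = reverse_join(graph, "Year", argmax(graph, rows, "Index"))   # -> ["1896"] on this toy table
```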
# 6 Parsing and ranking
Given the knowledge graph w, we now describe how to parse the utterance x into a set of candidate logical forms Zx
# 6.1 Parsing algorithm
We propose a new floating parser which is more flexible than a standard chart parser. Both parsers recursively build up derivations and corresponding logical forms by repeatedly applying deduction rules, but the floating parser allows logical form predicates to be generated independently from the utterance.
1508.00305 | 13 | Chart parser. We brieï¬y review the CKY al- gorithm for chart parsing to introduce notation. Given an utterance with tokens x1, . . . , xn, the CKY algorithm applies deduction rules of the folSemantics Anchored to the utterance match(z1) (match(s) = entity with name s) Rule Example TokenSpan â Entity Greece anchored to âGreeceâ TokenSpan â Atomic val(z1) (val(s) = interpreted value) 2012-07-XX anchored to âJuly 2012â Unanchored (ï¬oating) â
â Relation r Country (r = row-entity relation) â
â Relation λx[r.p.x] λx[Year.Date.x] (p = normalization relation) â
â Records â
â RecordFn Type.Row Index (list of all rows) (row â row index)
Table 2: Base deduction rules. Entities and atomic values (e.g., numbers, dates) are anchored to token spans, while other predicates are kept ï¬oating. (a â b represents a binary mapping b to a.)
lowing two kinds:
â &) | 1508.00305#13 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
1508.00305 | 14 | lowing two kinds:
â &)
(TokenSpan, i, j)[s] â (c, i, j)[f (s)], (c1, i, k)[z1] + (c2, k + 1, j)[z2]
(5) â (c, i, j)[f (z1, z2)].
The ï¬rst rule is a lexical rule that matches an utter- ance token span xi · · · xj (e.g., s = âNew Yorkâ) form (e.g., f (s) = and produces a logical NewYorkCity) with category c (e.g., Entity). The second rule takes two adjacent spans giv- ing rise to logical forms z1 and z2 and builds a new logical form f (z1, z2). Algorithmically, CKY stores derivations of category c covering the span xi · · · xj in a cell (c, i, j). CKY ï¬lls in the cells of increasing span lengths, and the logical forms in the top cell (ROOT, 1, n) are returned. | 1508.00305#14 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
1508.00305 | 15 | Floating parser. Chart parsing uses lexical rules (4) to generate relevant logical predicates, but in our setting of semantic parsing on tables, we do not have the luxury of starting with or inducing a full-ï¬edged lexicon. Moreover, there is a mismatch between words in the utterance and predicates in the logical form. For in- stance, consider the question âGreece held its last Summer Olympics in which year?â on the table in Figure 1 and the correct logical form R[λx[Year.Date.x]].argmax(Country.Greece, Index). While the entity Greece can be anchored to the token âGreeceâ, some logical predicates (e.g., Country) cannot be clearly anchored to a token span. We could potentially learn to anchor the logical form Country.Greece to âGreeceâ, but if the relation Country is not seen during training, such a mapping is impossible to learn from the | 1508.00305#15 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
1508.00305 | 16 | Rule Semantics Example Join + Aggregate Entity or Atomic + Values ZA China Atomic â Values C21 >=.30 (at least 30) (c ⬠{<, >, <=, >=}) Relation + Values â> Records 21.22 Relation + Records + Values Riz1]-22 Records â Records Next.2z1 Records â Records R[Next].z1 Values â Atomic a(z1) (a ⬠{count, max, min, sum, avg}) Values + ROOT ZA Country.China R[Year].Country.China Next.Country.China R[Next].Country.China count (Country.China) (events (rows) where the country is China) (years of events in China) (... before China) (... after China) (How often did China ...) Superlative Relation + RecordFn 2 Records + RecordFn â Records 8(21, 22) (s ⬠{argmax, argmin}) Relation + ValueFn Relation + Relation + ValueFn R{Az[a(z1.2)]] Ax[R{z1].z2.2] Values + ValueFn â Values 8(21, 22) Az[Nations.Number.z] argmax(Type.Row, \x[Nations.Number.z]) | 1508.00305#16 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
1508.00305 | 17 | Values + ValueFn â Values 8(21, 22) Az[Nations.Number.z] argmax(Type.Row, \x[Nations.Number.z]) argmin(City.Athens, Index) R[Az[count(City.z)]] Az[R[City].Nations.Number.z] argmax(..., R[Ax[count(City.x)]]) (row < value in Nations column) (events with the most participating nations) (first event in Athens) (city + num. of rows with that city) (city + value in Nations column) (most frequent city) Other operations ValueFn + Values + Values â> Values o(R[z1].22, R[z1].23) (o ⬠{add, sub, mul, div}) sub(R[Number].R[Nations].City.London, ...) (How many more participants were in London than ...) Entity + Entity > Values 21 U ze Chinal/France (China or France) Records + Records + Records z1 M1 22 City.BeijingCountry.China (...in Beijing, China) | 1508.00305#17 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
1508.00305 | 18 | Table 3: Compositional deduction rules. Each rule c1, . . . , ck â c takes logical forms z1, . . . , zk constructed over categories c1, . . . , ck, respectively, and produces a logical form based on the semantics.
training data. Similarly, some prominent tokens (e.g., âOlympicsâ) are irrelevant and have no predicates anchored to them.
Therefore, instead of anchoring each predicate in the logical form to tokens in the utterance via lexical rules, we propose parsing more freely. We replace the anchored cells (c, i, j) with floating cells (c, s) of category c and logical form size s. Then we apply rules of the following three kinds:
$$(\mathrm{TokenSpan}, i, j)[s] \to (c, 1)[f(s)], \quad (6)$$
$$\varnothing \to (c, 1)[f()], \quad (7)$$
$$(c_1, s_1)[z_1] + (c_2, s_2)[z_2] \to (c, s_1 + s_2 + 1)[f(z_1, z_2)]. \quad (8)$$
1508.00305 | 20 | Note that rules (6) are similar to (4) in chart parsing except that the ï¬oating cell (c, 1) only keeps track of the category and its size 1, not the span (i, j). Rules (7) allow us to construct predicates out of thin air. For example, we can construct a logical form representing a table rela- tion Country in cell (Relation, 1) using the rule â
â Relation [Country] independent of the ut- terance. Rules (8) perform composition, where the induction is on the size s of the logical form rather than the span length. The algorithm stops when the speciï¬ed maximum size is reached, after which the logical forms in cells (ROOT, s) for any
Figure 4: A derivation for the utterance "Greece held its last Summer Olympics in which year?" Only Greece is anchored to a phrase "Greece"; Year and other predicates are floating.
s are included in Z_x. Figure 4 shows an example derivation generated by our floating parser.
The floating parser is very flexible: it can skip tokens and combine logical forms in any order. This flexibility might seem too unconstrained, but we can use strong typing constraints to prevent nonsensical derivations from being constructed.
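Below is a minimal sketch of the floating chart indexed by (category, size), following rules (6)-(8) above. It is an illustrative reconstruction, not the paper's parser: the rule representations, the thunk/combine callables, and the simple truncation-based beam are assumptions.

```python
def floating_parse(anchored, unary_rules, binary_rules, max_size, beam=200):
    """Fill cells (category, size) -> list of logical forms.

    anchored     : {category: [logical forms built from token spans]}        (rule 6)
    unary_rules  : [(category, thunk)] predicates generated "out of thin air" (rule 7)
    binary_rules : [(cat1, cat2, cat, combine)] composition rules             (rule 8)
    """
    cells = {}
    for cat, forms in anchored.items():               # anchored entities / atomic values
        cells[(cat, 1)] = forms[:beam]
    for cat, thunk in unary_rules:                     # floating relations, Type.Row, Index, ...
        cells.setdefault((cat, 1), []).append(thunk())
    for size in range(2, max_size + 1):
        for c1, c2, cat, combine in binary_rules:
            for s1 in range(1, size - 1):              # size = s1 + s2 + 1, both parts >= 1
                s2 = size - 1 - s1
                for z1 in cells.get((c1, s1), []):
                    for z2 in cells.get((c2, s2), []):
                        cells.setdefault((cat, size), []).append(combine(z1, z2))
        for key in [k for k in cells if k[1] == size]:
            cells[key] = cells[key][:beam]             # beam pruning (score-based in practice)
    return [z for (cat, s), forms in cells.items() if cat == "ROOT" for z in forms]
```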
1508.00305 | 21 | âGreece held its last Summer Olympics in which year?â z = R[λx[Year.Number.x]].argmax(Type.Row, Index) y = {2012} (type: NUM, column: YEAR)
lex unlex (âµ âyearâ = YEAR) lex lex unlex (âµ âyearâ = YEAR)
Table 4: Example features that ï¬re for the (incor- rect) logical form z. All features are binary. (lex = lexicalized)
rules we use. We assume that all named entities will explicitly appear in the question x, so we an- chor all entity predicates (e.g., Greece) to token spans (e.g., âGreeceâ). We also anchor all numer- ical values (numbers, dates, percentages, etc.) de- tected by an NER system. In contrast, relations (e.g., Country) and operations (e.g., argmax) are kept ï¬oating since we want to learn how they are expressed in language. Connections between phrases in x and the generated relations and op- erations in z are established in the ranking model through features.
# 6.2 Features | 1508.00305#21 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
1508.00305 | 22 | # 6.2 Features
We define features φ(x, w, z) for our log-linear model to capture the relationship between the question x and the candidate z. Table 4 shows some example features from each feature type. Most features are of the form (f(x), g(z)) or (f(x), h(y)) where y = ⟦z⟧_w is the denotation, and f, g, and h extract some information (e.g., identity, POS tags) from x, z, or y, respectively.
phrase-predicate: Conjunctions between n-grams f(x) from x and predicates g(z) from z. We use both lexicalized features, where all possible pairs (f(x), g(z)) form distinct features, and binary unlexicalized features indicating whether f(x) and g(z) have a string match.
missing-predicate: Indicators on whether there are entities or relations mentioned in x but not in z. These features are unlexicalized.
denotation: Size and type of the denotation y = ⟦z⟧_w. The type can be either a primitive type (e.g., NUM, DATE, ENTITY) or the name of the column containing the entity in y (e.g., CITY).
1508.00305 | 23 | phrase-denotation: Conjunctions between ngrams from x and the types of y. Similar to the phrase-predicate features, we use both lexicalized and unlexicalized features.
headword-denotation: Conjunctions between the question word Q (e.g., what, who, how many) or the headword H (the first noun after the question word) with the types of y.
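As a rough sketch of a few of these feature templates (phrase-predicate and denotation features), the following is an illustrative simplification; the string-based feature names, the n-gram choices, and the omission of POS and headword information are assumptions.

```python
def extract_features(question_tokens, predicates, denotation, denotation_type):
    """Return a set of binary feature names for one candidate logical form."""
    feats = set()
    unigrams = set(question_tokens)
    bigrams = {" ".join(pair) for pair in zip(question_tokens, question_tokens[1:])}
    for phrase in unigrams | bigrams:
        for pred in predicates:
            feats.add(f"phrase={phrase}&predicate={pred}")      # lexicalized phrase-predicate
            if phrase.lower() == str(pred).lower():
                feats.add("phrase-predicate-string-match")      # unlexicalized variant
        feats.add(f"phrase={phrase}&type={denotation_type}")    # phrase-denotation
    feats.add(f"denotation-size={len(denotation)}")             # denotation features
    feats.add(f"denotation-type={denotation_type}")
    return feats
```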
# 6.3 Generation and pruning
Due to their recursive nature, the rules allow us to generate highly compositional logical forms. However, the compositionality comes at the cost of generating exponentially many logical forms, most of which are redundant (e.g., logical forms with an argmax operation on a set of size 1). We employ several methods to deal with this combi- natorial explosion:
Beam search. We compute the model probability of each partial logical form based on available features (i.e., features that do not depend on the final denotation) and keep only the K = 200 highest-scoring logical forms in each cell.
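A minimal sketch of this per-cell beam step, assuming a sparse weight dictionary and a feature function over partial logical forms (both hypothetical names, not the paper's API):

```python
def prune_to_beam(cell, theta, feature_fn, K=200):
    """Keep the K highest-scoring partial logical forms in one chart cell,
    scoring only with features available before execution."""
    scored = [(sum(theta.get(f, 0.0) for f in feature_fn(z)), z) for z in cell]
    scored.sort(key=lambda pair: -pair[0])
    return [z for _, z in scored[:K]]
```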
Pruning. We prune partial logical forms that lead to invalid or redundant final logical forms. For example, we eliminate any logical form that does not type check (e.g., Beijing ⊔ Greece), executes to an empty list (e.g., Year.Number.24), includes an aggregate or superlative on a singleton set (e.g., argmax(Year.Number.2012, Index)), or joins two relations that are the reverses of each other (e.g., R[City].City.Beijing).
# 7 Experiments
# 7.1 Main evaluation
We evaluate the system on the development sets (three random 80:20 splits of the training data) and the test data. In both settings, the tables we test on do not appear during training.
Evaluation metrics. Our main metric is accuracy, which is the number of examples (x, t, y) on which the system outputs the correct answer y. We also report the oracle score, which counts the number of examples where at least one generated candidate z ∈ Z_x executes to y.
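The two metrics can be computed as in the short sketch below (an assumption-laden illustration; `predict`, `candidates`, and `execute` are hypothetical callables standing in for the system's prediction, candidate generation, and logical-form execution).

```python
def evaluate(examples, predict, candidates, execute):
    """examples: list of (x, t, y). Accuracy: prediction equals y.
    Oracle: at least one candidate logical form executes to y."""
    n = len(examples)
    acc = sum(predict(x, t) == y for x, t, y in examples) / n
    ora = sum(any(execute(z, t) == y for z in candidates(x, t)) for x, t, y in examples) / n
    return acc, ora
```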
Baselines. We compare the system to two baselines. The first baseline (IR), which simulates information retrieval, selects an answer y among the entities in the table using a log-linear model over entities (table cells) rather than logical forms. The features are conjunctions between phrases in x and
| dev acc | dev ora | test acc | test ora
IR baseline | 13.4 | 69.1 | 12.7 | 70.6
WQ baseline | 23.6 | 34.4 | 24.3 | 35.6
Our system | 37.0 | 76.7 | 37.1 | 76.6
Table 5: Accuracy (acc) and oracle scores (ora) on the development sets (3 random splits of the training data) and the test data. | 1508.00305#25 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
| acc | ora
Our system | 37.0 | 76.7
(a) Rule Ablation
join only | 10.6 | 15.7
join + count (= WQ baseline) | 23.6 | 34.4
join + count + superlative | 30.7 | 68.6
all − {⊓, ⊔} | 34.8 | 75.1
(b) Feature Ablation
all − features involving predicate | 11.8 | 74.5
all − phrase-predicate | 16.9 | 74.5
all − lex phrase-predicate | 17.6 | 75.9
all − unlex phrase-predicate | 34.3 | 76.7
all − missing-predicate | 35.9 | 76.7
all − features involving denotation | 33.5 | 76.8
all − denotation | 34.3 | 76.6
all − phrase-denotation | 35.7 | 76.8
all − headword-denotation | 36.0 | 76.7
(c) Anchor operations to trigger words | 37.1 | 59.4
Table 6: Average accuracy and oracle scores on development data in various system settings.
properties of the answers y, which cover all fea- tures in our main system that do not involve the logical form. As an upper bound of this baseline, 69.1% of the development examples have the an- swer appearing as an entity in the table. | 1508.00305#26 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
In the second baseline (WQ), we only allow deduction rules that produce join and count logical forms. This rule subset has the same logical coverage as Berant and Liang (2014), which is designed to handle the WEBQUESTIONS (Berant et al., 2013) and FREE917 (Cai and Yates, 2013) datasets.
Results. Table 5 shows the results compared to the baselines. Our system gets an accuracy of 37.1% on the test data, which is signiï¬cantly higher than both baselines, while the oracle is 76.6%. The next subsections analyze the system components in more detail.
# 7.2 Dataset statistics
In this section, we analyze the breadth and depth of the WIKITABLEQUESTIONS dataset, and how the system handles them.
Number of relations. With 3,929 unique col- umn headers (relations) among 13,396 columns,
Operation | Amount
join (table lookup) | 13.5%
+ join with Next | +5.5%
+ aggregate (count, sum, max, ...) | +15.0%
+ superlative (argmax, argmin) | +24.5%
+ arithmetic, ⊓, ⊔ | +20.5%
+ other phenomena | +21.0%
Table 7: The logical operations required to answer the questions in 200 random examples. | 1508.00305#27 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
the tables in the WIKITABLEQUESTIONS dataset contain many more relations than closed-domain datasets such as Geoquery (Zelle and Mooney, 1996) and ATIS (Price, 1990). Additionally, the logical forms that execute to the correct denotations refer to a total of 2,056 unique column headers, which is greater than the number of relations in the FREE917 dataset (635 Freebase relations).

Knowledge coverage. We sampled 50 examples from the dataset and tried to answer them manually using Freebase. Even though Freebase contains some information extracted from Wikipedia, we can answer only 20% of the questions, indicating that WIKITABLEQUESTIONS contains a broad set of facts beyond Freebase.
Logical operation coverage. The dataset cov- ers a wide range of question types and logical operations. Table 6(a) shows the drop in oracle scores when different subsets of rules are used to generate candidates logical forms. The join only subset corresponds to simple table lookup, while join + count is the WQ baseline for Freebase ques- tion answering on the WEBQUESTIONS dataset. Finally, join + count + superlative roughly corre- sponds to the coverage of the Geoquery dataset. | 1508.00305#28 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
To better understand the distribution of logical operations in the WIKITABLEQUESTIONS dataset, we manually classified 200 examples based on the types of operations required to answer the question. The statistics in Table 7 show that while a few questions only require simple operations such as table lookup, the majority of the questions demand more advanced operations. Additionally, 21% of the examples cannot be answered using any logical form generated from the current deduction rules; these examples are discussed in Section 7.4.
Compositionality. From each example, we compute the logical form size (number of rules applied) of the highest-scoring candidate that exe- cutes to the correct denotation. The histogram in Figure 5 shows that a signiï¬cant number of logical
Figure 5: Sizes of the highest-scoring correct candidate logical forms in development examples.
Figure 6: Accuracy (solid red) and oracle (dashed blue) scores with different beam sizes, with and without pruning.
forms are non-trivial. | 1508.00305#29 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
Beam size and pruning. Figure 6 shows the results with and without pruning on various beam sizes. Apart from saving time, pruning also pre- vents bad logical forms from clogging up the beam which hurts both oracle and accuracy metrics.
# 7.3 Features
Effect of features. Table 6(b) shows the accu- racy when some feature types are ablated. The most inï¬uential features are lexicalized phrase- predicate features, which capture the relationship between phrases and logical operations (e.g., relat- ing âlastâ to argmax) as well as between phrases and relations (e.g., relating âbeforeâ to < or Next, and relating âwhoâ to the relation Name).
Anchoring with trigger words. In our parsing algorithm, relations and logical operations are not anchored to the utterance. We consider an alter- native approach where logical operations are an- chored to âtriggerâ phrases, which are hand-coded based on co-occurrence statistics (e.g., we trigger a count logical form with how, many, and total). | 1508.00305#30 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
Table 6(c) shows that the trigger words do not significantly impact the accuracy, suggesting that the original system is already able to learn the relationship between phrases and operations even without a manual lexicon. As an aside, the huge drop in oracle is because fewer "semantically incorrect" logical forms are generated; we discuss this phenomenon in the next subsection.
# 7.4 Semantically correct logical forms
In our setting, we face a new challenge that arises from learning with denotations: with deeper compositionality, a larger number of nonsensical logical forms can execute to the correct denotation. For example, if the target answer is a small number (say, 2), it is possible to count the number of rows with some random properties and arrive at the correct answer. However, as the system encounters more examples, it can potentially learn to disfavor them by recognizing the characteristics of semantically correct logical forms.
Generating semantically correct logical forms. The system can learn the features of semantically correct logical forms only if it can generate them in the first place. To see how well the system can generate correct logical forms, looking at the oracle score is insufficient since bad logical forms can execute to the correct denotations. Instead, we randomly chose 200 examples and manually annotated them with logical forms to see if a trained system can produce the annotated logical form as a candidate.
Out of 200 examples, we find that 79% can be manually annotated. The remaining ones include artifacts such as unhandled question types (e.g., yes-no questions, or questions with the phrases "same" or "consecutive"), table cells that require advanced normalization methods (e.g., cells with comma-separated lists), and incorrect annotations.

The system generates the annotated logical form among the candidates in 53.5% of the examples. The missing examples are mostly caused by anchoring errors due to lexical mismatch (e.g., "Italian" → Italy, or "no zip code" → an empty cell in the zip code column) or the need to generate complex logical forms from a single phrase (e.g., "May 2010" → >=.2010-05-01 ⊓ <=.2010-05-31).
# 7.5 Error analysis | 1508.00305#33 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
The errors on the development data can be divided into four groups. The first two groups are unhandled question types (21%) and the failure to anchor entities (25%) as described in Section 7.4. The third group is normalization and type errors (29%): although we handle some forms of entity normalization, we observe many unhandled string formats such as times (e.g., 3:45.79) and city-country pairs (e.g., Beijing, China), as well as complex calculation such as computing time periods (e.g., 12pm-1am → 1 hour). Finally, we have
ranking errors (25%), which mostly occur when the utterance phrase and the relation are obliquely related (e.g., "airplane" and Model).
# 8 Discussion
Our work simultaneously increases the breadth of knowledge source and the depth of compositional- ity in semantic parsing. This section explores the connections in both aspects to related work. | 1508.00305#34 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
Logical coverage. Different semantic parsing systems are designed to handle different sets of logical operations and degrees of compositionality. For example, form-filling systems (Wang et al., 2011) usually cover a smaller scope of operations and compositionality, while early statistical semantic parsers for question answering (Wong and Mooney, 2007; Zettlemoyer and Collins, 2007) and high-accuracy natural language interfaces for databases (Androutsopoulos et al., 1995; Popescu et al., 2003) target more compositional utterances with a wide range of logical operations. This work aims to increase the logical coverage even further. For example, compared to the Geoquery dataset, the WIKITABLEQUESTIONS dataset includes a more diverse set of logical operations, and while it does not have extremely compositional questions like in Geoquery (e.g., "What states border states that border states that border Florida?"), our dataset contains fairly compositional questions on average.
To parse a compositional utterance, many works rely on a lexicon that translates phrases to entities, relations, and logical operations. A lexicon can be automatically generated (Unger and Cimiano, 2011; Unger et al., 2012), learned from data (Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011), or extracted from external sources (Cai and Yates, 2013; Berant et al., 2013), but requires some techniques to generalize to unseen data. Our work takes a different approach similar to the logical form growing algorithm in Berant and Liang (2014) by not anchoring relations and operations to the utterance.
Knowledge domain. Recent works on seman- tic parsing for question answering operate on more open and diverse data domains. In particular, large-scale knowledge bases have gained popular- ity in the semantic parsing community (Cai and Yates, 2013; Berant et al., 2013; Fader et al., 2014). The increasing number of relations and en- tities motivates new resources and techniques for
improving the accuracy, including the use of ontol- ogy matching models (Kwiatkowski et al., 2013), paraphrase models (Fader et al., 2013; Berant and Liang, 2014), and unlabeled sentences (Krishna- murthy and Kollar, 2013; Reddy et al., 2014). | 1508.00305#36 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
1508.00305 | 37 | Our work leverages open-ended data from the Web through semi-structured tables. There have been several studies on analyzing or inferring the table schemas (Cafarella et al., 2008; Venetis et al., 2011; Syed et al., 2010; Limaye et al., 2010) and answering search queries by joining tables on sim- ilar columns (Cafarella et al., 2008; Gonzalez et al., 2010; Pimplikar and Sarawagi, 2012). While the latter is similar to question answering, the queries tend to be keyword lists instead of natural language sentences. In parallel, open information extraction (Wu and Weld, 2010; Masaum et al., 2012) and knowledge base population (Ji and Gr- ishman, 2011) extract information from web pages and compile them into structured data. The result- ing knowledge base is systematically organized, but as a trade-off, some knowledge is inevitably lost during extraction and the information is forced to conform to a speciï¬c schema. To avoid these is- sues, we choose to work on HTML tables directly. In future work, we wish to draw informa- tion from other semi-structured formats such as colon-delimited | 1508.00305#37 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 38 | to work on HTML tables directly. In future work, we wish to draw information from other semi-structured formats such as colon-delimited pairs (Wong et al., 2009), bulleted lists (Gupta and Sarawagi, 2009), and top-k lists (Zhang et al., 2013). Pasupat and Liang (2014) used a framework similar to ours to extract entities from web pages, where the "logical forms" were XPath expressions. A natural direction is to combine the logical compositionality of this work with the even broader knowledge source of general web pages. | 1508.00305#38 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 39 | Acknowledgements. We gratefully acknowl- edge the support of the Google Natural Language Understanding Focused Program and the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040.
Data and reproducibility. The WIKITABLEQUESTIONS dataset can be downloaded at http://nlp.stanford.edu/software/sempre/wikitable/. Additionally, code, data, and experiments for this paper are available on the CodaLab platform at https://www.codalab.org/worksheets/0xf26cd79d4d734287868923ad1067cf4c/.
# References
[Androutsopoulos et al.1995] I. Androutsopoulos, G. D. Ritchie, and P. Thanisch. 1995. Natural language interfaces to databases - an introduction. Journal of Natural Language Engineering, 1:29-81.
[Berant and Liang2014] J. Berant and P. Liang. 2014. Semantic parsing via paraphrasing. In Association for Computational Linguistics (ACL). | 1508.00305#39 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 40 | [Berant and Liang2014] J. Berant and P. Liang. 2014. Semantic parsing via paraphrasing. In Association for Computational Linguistics (ACL).
[Berant et al.2013] J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).
[Cafarella et al.2008] M. J. Cafarella, A. Halevy, D. Z. Wang, E. Wu, and Y. Zhang. 2008. WebTables: exploring the power of tables on the web. In Very Large Data Bases (VLDB), pages 538â549.
[Cai and Yates2013] Q. Cai and A. Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL).
[Duchi et al.2010] J. Duchi, E. Hazan, and Y. Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT). | 1508.00305#40 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 41 | [Fader et al.2013] A. Fader, L. Zettlemoyer, and O. Etzioni. 2013. Paraphrase-driven learning for open question answering. In Association for Computational Linguistics (ACL).
[Fader et al.2014] A. Fader, L. Zettlemoyer, and O. Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 1156-1165.
[Gonzalez et al.2010] H. Gonzalez, A. Y. Halevy, C. S. Jensen, A. Langen, J. Madhavan, R. Shapley, W. Shen, and J. Goldberg-Kidon. 2010. Google fusion tables: web-centered data management and collaboration. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, pages 1061-1066.
[Gupta and Sarawagi2009] R. Gupta and S. Sarawagi. 2009. Answering table augmentation queries from unstructured lists on the web. In Very Large Data Bases (VLDB), number 1, pages 289-300. | 1508.00305#41 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 42 | [Ji and Grishman2011] H. Ji and R. Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Association for Computational Linguistics (ACL), pages 1148-1158.
[Krishnamurthy and Kollar2013] J. Krishnamurthy and T. Kollar. 2013. Jointly learning to parse and per- ceive: Connecting natural language to the physical world. Transactions of the Association for Compu- tational Linguistics (TACL), 1:193â206.
[Kwiatkowski et al.2011] T. Kwiatkowski, L. Zettle- moyer, S. Goldwater, and M. Steedman. 2011. Lex- ical generalization in CCG grammar induction for semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1512â1523.
[Kwiatkowski et al.2013] T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling seman- tic parsers with on-the-ï¬y ontology matching. In Empirical Methods in Natural Language Processing (EMNLP).
[Liang2013] P. Liang. 2013. Lambda dependency-based compositional semantics. Technical report, arXiv. | 1508.00305#42 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 43 | 2013. Lambda dependency- based compositional semantics. Technical report, arXiv.
[Limaye et al.2010] G. Limaye, S. Sarawagi, and S. Chakrabarti. 2010. Annotating and searching web tables using entities, types and relationships. In Very Large Data Bases (VLDB), volume 3, pages 1338-1347.
[Masaum et al.2012] Masaum, M. Schmitz, R. Bart, S. Soderland, and O. Etzioni. 2012. Open language learning for information extraction. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 523-534.
[Pasupat and Liang2014] P. Pasupat and P. Liang. 2014. Zero-shot entity extraction from web pages. In Association for Computational Linguistics (ACL).
[Pimplikar and Sarawagi2012] R. Pimplikar and S. Sarawagi. 2012. Answering table queries on the web using column keywords. In Very Large Data Bases (VLDB), volume 5, pages 908-919.
[Popescu et al.2003] A. Popescu, O. Etzioni, and H. Kautz. 2003. Towards a theory of natural language interfaces to databases. In International Conference on Intelligent User Interfaces (IUI), pages 149-157. | 1508.00305#43 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 44 | [Price1990] P. Price. 1990. Evaluation of spoken lan- guage systems: The ATIS domain. In Proceedings of the Third DARPA Speech and Natural Language Workshop, pages 91â95.
[Reddy et al.2014] S. Reddy, M. Lapata, and M. Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics (TACL), 2(10):377-392.
[Syed et al.2010] Z. Syed, T. Finin, V. Mulwad, and A. Joshi. 2010. Exploiting a web of semantic data for interpreting tables. In Proceedings of the Second Web Science Conference.
[Unger and Cimiano2011] C. Unger and P. Cimiano. 2011. Pythia: compositional meaning construction for ontology-based question answering on the semantic web. In Proceedings of the 16th International Conference on Natural Language Processing and Information Systems, pages 153-160.
[Unger et al.2012] C. Unger, L. B¨uhmann, J. Lehmann, 2012. A. Ngonga, D. Gerber, and P. Cimiano. Template-based question answering over RDF data. In World Wide Web (WWW), pages 639â648. | 1508.00305#44 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 45 | [Venetis et al.2011] P. Venetis, A. Halevy, J. Madhavan, M. Pas¸ca, W. Shen, F. Wu, G. Miao, and C. Wu. 2011. Recovering semantics of tables on the web. In Very Large Data Bases (VLDB), volume 4, pages 528â538.
[Wang et al.2011] Y. Wang, L. Deng, and A. Acero. 2011. Semantic frame-based spoken language understanding. Spoken Language Understanding: Systems for Extracting Semantic Information from Speech, pages 41-91.
[Wong and Mooney2007] Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960-967.
[Wong et al.2009] Y. W. Wong, D. Widdows, T. Lokovic, and K. Nigam. 2009. Scalable attribute-value extraction from semi-structured text. In IEEE International Conference on Data Mining Workshops, pages 302-307.
[Wu and Weld2010] F. Wu and D. S. Weld. 2010. Open information extraction using Wikipedia. In Association for Computational Linguistics (ACL), pages 118-127. | 1508.00305#45 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1508.00305 | 46 | [Zelle and Mooney1996] M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using in- ductive logic programming. In Association for the Advancement of Artiï¬cial Intelligence (AAAI), pages 1050â1055.
[Zettlemoyer and Collins2007] L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678-687.
[Zhang et al.2013] Z. Zhang, K. Q. Zhu, H. Wang, and H. Li. 2013. Automatic extraction of top-k lists from the web. In International Conference on Data Engineering. | 1508.00305#46 | Compositional Semantic Parsing on Semi-Structured Tables | Two important aspects of semantic parsing for question answering are the
breadth of the knowledge source and the depth of logical compositionality.
While existing work trades off one aspect for another, this paper
simultaneously makes progress on both fronts through a new task: answering
complex questions on semi-structured tables using question-answer pairs as
supervision. The central challenge arises from two compounding factors: the
broader domain results in an open-ended set of relations, and the deeper
compositionality results in a combinatorial explosion in the space of logical
forms. We propose a logical-form driven parsing algorithm guided by strong
typing constraints and show that it obtains significant improvements over
natural baselines. For evaluation, we created a new dataset of 22,033 complex
questions on Wikipedia tables, which is made publicly available. | http://arxiv.org/pdf/1508.00305 | Panupong Pasupat, Percy Liang | cs.CL | null | null | cs.CL | 20150803 | 20150803 | [] |
1507.05910 | 0 | arXiv:1507.05910v3 [cs.LG] 30 Nov 2015
Under review as a conference paper at ICLR 2016
# CLUSTERING IS EFFICIENT FOR APPROXIMATE MAXIMUM INNER PRODUCT SEARCH
# Alex Auvolat, École Normale Supérieure, France.
Sarath Chandar, Pascal Vincent, Université de Montréal, Canada.
Hugo Larochelle, Twitter Cortex, USA, and Université de Sherbrooke, Canada.
Yoshua Bengio, Université de Montréal, Canada.
# ABSTRACT | 1507.05910#0 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 1 | Yoshua Bengioâ Universit´e de Montr´eal, Canada.
# ABSTRACT
Efï¬cient Maximum Inner Product Search (MIPS) is an important task that has a wide applicability in recommendation systems and classiï¬cation with a large number of classes. Solutions based on locality-sensitive hashing (LSH) as well as tree-based solutions have been investigated in the recent literature, to perform approximate MIPS in sublinear time. In this paper, we compare these to another extremely simple approach for solving approximate MIPS, based on variants of the k-means clustering algorithm. Speciï¬cally, we propose to train a spherical k- means, after having reduced the MIPS problem to a Maximum Cosine Similarity Search (MCSS). Experiments on two standard recommendation system bench- marks as well as on large vocabulary word embeddings, show that this simple approach yields much higher speedups, for the same retrieval precision, than cur- rent state-of-the-art hashing-based and tree-based methods. This simple method also yields more robust retrievals when the query is corrupted by noise.
# INTRODUCTION | 1507.05910#1 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 2 | # INTRODUCTION
The Maximum Inner Product Search (MIPS) problem has recently received increased attention, as it arises naturally in many large scale tasks. In recommendation systems (Koenigstein et al., 2012; Bachrach et al., 2014), users and items to be recommended are represented as vectors that are learnt at training time based on the user-item rating matrix. At test time, when the model is deployed for suggesting recommendations, given a user vector, the model will perform a dot product of the user vector with all the item vectors and pick top K items with maximum dot product to recommend. With millions of candidate items to recommend, it is usually not possible to do a full linear search within the available time frame of only few milliseconds. This problem amounts to solving a K- MIPS problem. | 1507.05910#2 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 3 | Another common instance where the K-MIPS problem arises is in extreme classiï¬cation tasks (Vi- jayanarasimhan et al., 2014), with a huge number of classes. At inference time, predicting the top-K most likely class labels for a given data point can be cast as a K-MIPS problem. Such extreme (prob- abilistic) classiï¬cation problems occur often in Natural Language Processing (NLP) tasks where the classes are words in a predetermined vocabulary. For example in neural probabilistic language mod- els (Bengio et al., 2003) the probabilities of a next word given the context of the few previous words is computed, in the last layer of the network, as a multiplication of the last hidden layer representa- tion with a very large matrix (an embedding dictionary) that has as many columns as there are words in the vocabulary. Each such column can be seen as corresponding to the embedding of a vocabu- lary word in the hidden layer space. Thus an inner product is taken between each of these and the hidden representation, to yield an inner product âscoreâ for each vocabulary word. Passed through a softmax nonlinearity, these yield the predicted probabilities for all possible words. The ranking of these probability values is unaffected by the softmax layer, so ï¬nding the k most probable words is | 1507.05910#3 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 4 | # âEqual contribution â and CIFAR
exactly equivalent to finding the ones with the largest inner product scores, i.e. solving a K-MIPS problem.
In many cases the retrieved result need not be exact: it may be sufficient to obtain a subset of k vectors whose inner product with the query is very high, and thus highly likely (though not guaranteed) to contain some of the exact K-MIPS vectors. These examples motivate research on approximate K-MIPS algorithms. If we can obtain large speedups over a full linear search without sacrificing too much on precision, it will have a direct impact on such large-scale applications.
Formally, the K-MIPS problem is stated as follows: given a set X = {x1, . . . , xn} of points and a query vector q, find
argmax^(K)_{i ∈ X} q^T x_i    (1)
where the argmax^(K) notation corresponds to the set of the indices providing the K maximum values. Such a problem can be solved exactly in linear time by calculating all the q^T x_i and selecting the K maximum items, but such a method is too costly to be used on large applications where we typically have hundreds of thousands of entries in the set. | 1507.05910#4 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
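To make the linear-scan baseline of Equation (1) concrete, here is a minimal NumPy sketch; the function name `exact_k_mips` and the use of `np.argpartition` are our own illustration, not part of the paper:

```python
import numpy as np

def exact_k_mips(X, q, K):
    """Exact K-MIPS by full linear scan: score every point, keep the K largest."""
    scores = X @ q                              # q^T x_i for every row x_i of X
    top = np.argpartition(-scores, K - 1)[:K]   # unordered top-K indices in O(n)
    return top[np.argsort(-scores[top])]        # order the K winners by score

# Tiny usage example with random data
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 64))              # n database vectors
q = rng.normal(size=64)                         # query vector
print(exact_k_mips(X, q, K=10))
```

This exact linear scan is the reference answer that the approximate, candidate-set based methods are measured against.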
1507.05910 | 5 | All the methods discussed in this article are based on the notion of a candidate set, i.e. a subset of the dataset that they return, and on which we will do an exact K-MIPS, making its computation much faster. There is no guarantee that the candidate set contains the target elements, therefore these methods solve approximate K-MIPS. Better algorithms will provide us with candidate sets that are both smaller and have larger intersections with the actual K maximum inner product vectors.
MIPS is related to nearest neighbor search (NNS), and to maximum similarity search. But it is considered a harder problem because the inner product neither satisfies the triangular inequality as distances usually do, nor does it satisfy a basic property of similarity functions, namely that the similarity of an entry with itself is at least as large as its similarity with anything else: for a vector x, there is no guarantee that x^T x ≥ x^T y for all y. Thus we cannot directly apply efficient nearest neighbor search or maximum similarity search algorithms to the MIPS problem.
Given a set X = {x1, . . . , xn} of points and a query vector q, the K-NNS problem with Euclidean distance is defined as: | 1507.05910#5 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
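Every method compared in this paper ends with the same step: an exact K-MIPS restricted to the returned candidate set. A small sketch of that re-ranking step (ours, with hypothetical names), assuming `candidates` is an integer index array:

```python
import numpy as np

def rerank_candidates(X, q, candidates, K):
    """Exact K-MIPS restricted to a candidate set: only candidate rows are scored."""
    candidates = np.asarray(candidates)
    scores = X[candidates] @ q              # inner products for candidates only
    order = np.argsort(-scores)[:K]         # best K among the candidates
    return candidates[order]                # indices back into the full dataset
```

Better candidate sets are those that are small yet still contain most of the true top-K indices.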
1507.05910 | 6 | argmin^(K)_{i ∈ X} ||q − x_i||_2^2 = argmax^(K)_{i ∈ X} ( q^T x_i − ||x_i||_2^2 / 2 )    (2)
and the maximum cosine similarity problem (K-MCSS) is defined as:
argmax^(K)_{i ∈ X} q^T x_i / (||q|| ||x_i||) = argmax^(K)_{i ∈ X} q^T x_i / ||x_i||    (3)
K-NNS and K-MCSS are different problems than K-MIPS, but it is easy to see that all three become equivalent provided all data vectors x_i have the same Euclidean norm. Several approaches to MIPS make use of this observation and first transform a MIPS problem into a NNS or MCSS problem.
In this paper, we propose and empirically investigate a very simple approach for the approximate K-MIPS problem. It consists in ï¬rst reducing the problem to an approximate K-MCSS problem (as has been previously done in (Shrivastava and Li, 2015) ) on top of which we perform a spherical k-means clustering. The few clusters whose centers best match the query yield the candidate set. | 1507.05910#6 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
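The equivalence of K-MIPS, K-NNS and K-MCSS under equal norms is easy to check numerically; the following sketch (ours, not from the paper) normalizes every data vector and verifies that the three rankings select the same top-K set:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 32))
X /= np.linalg.norm(X, axis=1, keepdims=True)         # force ||x_i||_2 = 1 for all points
q = rng.normal(size=32)
K = 5

mips = np.argsort(-(X @ q))[:K]                       # largest q^T x_i
nns = np.argsort(np.linalg.norm(X - q, axis=1))[:K]   # smallest ||q - x_i||_2
mcss = np.argsort(-(X @ q) / np.linalg.norm(q))[:K]   # largest cosine similarity

assert set(mips) == set(nns) == set(mcss)             # identical answers under equal norms
```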
1507.05910 | 7 | The rest of the paper is organized as follows: In section 2, we review previously proposed ap- proaches for MIPS. Section 3 describes our proposed simple solution k-means MIPS in more details and section 4 discusses ways to further improve the performance by using a hierarchical k-means version. In section 5, we empirically compare our methods to the state-of-the-art in tree-based and hashing-based approaches, on two standard collaborative ï¬ltering benchmarks and on a larger word embedding datasets. Section 6 concludes the paper with discussion on future work.
# 2 RELATED WORK
There are two common types of solution for MIPS in the literature: tree-based methods and hashing- based methods. Tree-based methods are data dependent (i.e. ï¬rst trained to adapt to the speciï¬c data set) while hash-based methods are mostly data independent.
# Under review as a conference paper at ICLR 2016 | 1507.05910#7 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 8 | 2
Tree-based approaches: The Maximum Inner Product Search problem was ï¬rst formalized in (Ram and Gray, 2012). Ram and Gray (2012) provided a tree-based solution for the problem. Speciï¬cally, they constructed a ball tree with vectors in the database and bounded the maximum inner product with a ball. Their novel analytical upper bound for maximum inner product of a given point with points in a ball made it possible to design a branch and bound algorithm to solve MIPS using the constructed ball tree. Ram and Gray (2012) also proposes a dual-tree based search using cone trees when you have a batch of queries. One issue with this ball-tree based approach (IP-Tree) is that it partitions the set of data points based on the Euclidean distance, while the problem hasnât effectively been converted to NNS. In contrast, PCA-Tree (Bachrach et al., 2014), the current state-of-the-art tree-based approach to MIPS, ï¬rst converts MIPS to NNS by appending an additional component to the vector that ensures that all vectors are of constant norm. This is followed by PCA and by a balanced kd-tree style tree construction. | 1507.05910#8 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 9 | Hashing based approaches: Shrivastava and Li (2014) is the ï¬rst work to propose an explicit Asymmetric Locality Sensitive Hashing (ALSH) construction to perform MIPS. They converted MIPS to NNS and used the L2-LSH algorithm (Datar et al., 2004). Subsequently, Shrivastava and Li (2015) proposed another construction to convert MIPS to MCSS and used the Signed Random Projection (SRP) hashing method. Both works were based on the assumption that a symmetric- LSH family does not exist for MIPS problem. Later, Neyshabur and Srebro (2015) showed an explicit construction of a symmetric-LSH algorithm for MIPS which had better performance than the previous ALSH algorithms. Finally, Vijayanarasimhan et al. (2014) propose to use Winner-Take- All hashing to pick top-K classes to consider during training and inference in large classiï¬cation problems. | 1507.05910#9 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 10 | Hierarchical softmax: A notable approach to address the problem of scaling classiï¬ers to a huge number of classes is the hierarchical softmax (Morin and Bengio, 2005). It is based on prior clus- tering of the words into a binary, or more generally n-ary tree that serves as a ï¬xed structure for the learning process of the model. The complexity of training is reduced from O(n) to O(log n). Due to its clustering and tree structure, it resembles the MIPS techniques we explore in this paper. However, the approaches differ at a fundamental level. Hierarchical softmax deï¬nes the probability of a leaf node as the product of all the probabilities computed by all the intermediate softmaxes on the way to that leaf node. By contrast, an approximate MIPS search imposes no such constraining structure on the probabilistic model, and is better though as efï¬ciently searching for top winners of what amounts to a large ordinary ï¬at softmax.
# 3 k-MEANS CLUSTERING FOR APPROXIMATE MIPS
In this section, we propose a simple k-means clustering based solution for approximate MIPS.
# 3.1 MIPS TO MCSS | 1507.05910#10 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 11 | In this section, we propose a simple k-means clustering based solution for approximate MIPS.
# 3.1 MIPS TO MCSS
We follow the previous work by Shrivastava and Li (2015) for reducing the MIPS problem to the MCSS problem by ingeniously rescaling the vectors and adding new components, making the norms of all the vectors approximately the same. Let X = {x1, . . . , xn} be our dataset. Let U < 1 and m ∈ N* be parameters of the algorithm. The first step is to scale all the vectors in our dataset by the same factor such that max_i ||x_i||_2 = U. We then apply two mappings P and Q, one on the data points and another on the query vector. These two mappings simply concatenate m new components to the vectors making the norms of the data points all roughly the same. The mappings are defined as follows:
P(x) = [x, 1/2 − ||x||_2^2, 1/2 − ||x||_2^4, . . . , 1/2 − ||x||_2^{2^m}]    (4)
Q(x) = [x, 0, 0, . . . , 0]    (5) | 1507.05910#11 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
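A short sketch of the rescaling and of the mappings P and Q in Equations (4)-(5), under our reading of the appended terms as 1/2 − ||x||^2, 1/2 − ||x||^4, ..., 1/2 − ||x||^(2^m); the function names and the default values of U and m are ours, not the authors':

```python
import numpy as np

def preprocess_database(X, U=0.83, m=3):
    """Scale so that max_i ||x_i||_2 = U < 1, then append the m extra components of P."""
    X = X * (U / np.linalg.norm(X, axis=1).max())
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    extras = [0.5 - norms ** (2 ** i) for i in range(1, m + 1)]  # 1/2 - ||x||^2, ||x||^4, ...
    return np.hstack([X] + extras)

def preprocess_query(q, m=3):
    """Q(q) pads the query with m zeros, so Q(q)^T P(x) equals q^T x on the rescaled data."""
    return np.concatenate([q, np.zeros(m)])
```

Because the appended query components are zero, Q(q)^T P(x_i) is proportional to the original q^T x_i, while the rows of the transformed database become nearly unit-norm; this is what makes a cosine-similarity search on the transformed data a good proxy for MIPS.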
1507.05910 | 12 | As shown in Shrivastava and Li (2015), mapping P brings all the vectors to roughly the same norm: we have ||P(x_i)||_2^2 = m/4 + ||x_i||_2^{2^{m+1}}, with the last term vanishing as m → +∞, since ||x_i||_2 ≤ U < 1. We thus have the following approximation of MIPS by MCSS for any query vector q,
argmax^(K)_i q^T x_i ≈ argmax^(K)_i ( Q(q)^T P(x_i) ) / ( ||Q(q)||_2 ||P(x_i)||_2 )    (6)
3.2 MCSS USING SPHERICAL k-MEANS
Assuming all data points x1, . . . , xn have been transformed as x_j ← P(x_j) so as to be scaled to a norm of approximately 1, then the spherical k-means1 algorithm (Zhong, 2005) can efficiently be used to do approximate MCSS. Algorithm 1 is a formal specification of the spherical k-means algorithm, where we denote by c_i the centroid of cluster i (i ∈ {1, . . . , k}) and a_j the index of the cluster assigned to each point x_j.
# Algorithm 1 Spherical k-means | 1507.05910#12 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 13 | # Algorithm 1 Spherical k-means
a_j ← rand(k)
while c_i or a_j changed at previous step do
    c_i ← ( Σ_{j | a_j = i} x_j ) / || Σ_{j | a_j = i} x_j ||_2
    a_j ← argmax_{i ∈ {1, . . . , k}} c_i^T x_j
end while
The difference between standard k-means clustering and spherical k-means is that in the spherical variant, the data points are clustered not according to their position in the Euclidean space, but according to their direction.
To find the one vector that has maximum cosine similarity to query point q in a dataset clustered by this method, we first find the cluster whose centroid has the best cosine similarity with the query vector, i.e. the i such that q^T c_i is maximal, and consider all the points belonging to that cluster as the candidate set. We then simply take argmax_{j | a_j = i} q^T x_j as an approximation for our maximum cosine similarity vector. This method can be extended for finding the K maximum cosine similarity vectors: we compute the cosine similarity between the query and all the vectors of the candidate set and take the K best matches.
One issue with constructing a candidate set from a single cluster is that the quality of the set will be poor for points close to the boundaries between clusters. To alleviate this problem, we can increase the size of the candidate sets by constructing them instead from the points of the top-p best matching clusters. | 1507.05910#13 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
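A compact NumPy sketch of Algorithm 1 together with the top-p candidate-set query described above (our own illustrative implementation; it assumes the rows of X have already been passed through P and are roughly unit-norm):

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=20, seed=0):
    """Cluster by direction: centroids are L2-normalized, points go to argmax_i c_i^T x_j."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, k, size=X.shape[0])
    centroids = np.zeros((k, X.shape[1]))
    for _ in range(n_iter):
        for i in range(k):
            members = X[assign == i]
            if len(members):
                s = members.sum(axis=0)
                centroids[i] = s / np.linalg.norm(s)   # normalized mean direction
        new_assign = np.argmax(X @ centroids.T, axis=1)
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
    return centroids, assign

def candidate_set(q, centroids, assign, p=3):
    """Members of the p clusters whose centroids best match the query direction."""
    best = np.argsort(-(centroids @ q))[:p]
    return np.flatnonzero(np.isin(assign, best))
```

Approximate K-MIPS is then simply the exact scan of the earlier sections restricted to `candidate_set(q, ...)`.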
1507.05910 | 14 | We note that other approximate search methods exploit similar ideas. For example, Bachrach et al. (2014) proposes a so-called neighborhood boosting method for PCA-Tree, by considering the path to each leaf as a binary vector (based on decision to go left or right) and given a target leaf, consider all other leaves which are one hamming distance away.
# 4 HIERARCHICAL k-MEANS FOR FASTER AND MORE PRECISE SEARCH
While using a single-level clustering of the data points might yield a sufficiently fast search procedure for moderately large databases, it can be insufficient for much larger collections. | 1507.05910#14 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 15 | Indeed, if we have n points, by clustering our dataset into √n clusters so that each cluster contains approximately √n points, the search cost becomes of the order of O(√n). If we use the single closest cluster as a candidate set, then the candidate set size is of the order of √n. But as mentioned earlier, we will typically want to consider the two or three closest clusters as a candidate set, in order to limit problems arising from the query points close to the boundary between clusters or when doing approximate K-MIPS with K fairly big (for example 100). A consequence of increasing candidate sets this way is that they can quickly grow wastefully big, containing many unwanted items. To restrict the candidate sets to a smaller count of better targeted items, we would need to have smaller clusters, but then the search for the best matching clusters becomes the most expensive part. To address this situation, we propose an approach where we cluster our dataset into many small clusters, and then cluster the small clusters into bigger clusters, and so on any number of times. Our approach is thus a bottom-up clustering approach.
1Note that we use K to refer to the number of top-K items to retrieve in search and k for the number of clusters in k-means. These two quantities are otherwise not the same.
# Under review as a conference paper at ICLR 2016 | 1507.05910#15 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 16 |
For example, we can cluster our datasets in n^{2/3} first-level, small clusters, and then cluster the centroids of the first-level clusters into n^{1/3} second-level clusters, making our data structure a two-layer hierarchical clustering. This approach can be generalized to as many levels of clustering as necessary.
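As a rough sketch of this two-level bottom-up construction (the crude one-pass clustering stand-in below is only there to keep the example self-contained and runnable; a proper spherical k-means would be used in practice):

```python
# Sketch of the two-level bottom-up construction described above (illustrative only).
import numpy as np

def crude_spherical_kmeans(X, k, seed=0):
    # stand-in for a real spherical k-means: random centroids + one assignment pass
    rng = np.random.default_rng(seed)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = Xn[rng.choice(len(Xn), size=k, replace=False)]
    assign = np.argmax(Xn @ C.T, axis=1)
    return C, assign

n = 8000
X = np.random.default_rng(0).normal(size=(n, 64))

k1 = int(round(n ** (2 / 3)))        # ~400 first-level (small) clusters
k2 = int(round(n ** (1 / 3)))        # ~20 second-level (big) clusters

C1, assign_points_to_1 = crude_spherical_kmeans(X, k1)        # cluster the data points
C2, assign_1_to_2 = crude_spherical_kmeans(C1, k2, seed=1)    # cluster the level-1 centroids

# assign_points_to_1[i] : level-1 cluster of data point i
# assign_1_to_2[j]      : level-2 cluster of level-1 centroid j
```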
Figure 1: Walk down a hierarchical clustering tree: at each level we have a candidate set for the next level. In the first level, the dashed red boxes represent the p best matches, which give us a candidate set for the second level, etc.
To search for the small clusters that best match the query point and will constitute a good candidate set, we go down the hierarchy keeping at each level only the p best matching clusters. This process is illustrated in Figure 1. Since at all levels the clusters are of much smaller size, we can take much larger values for p, for example p = 8 or p = 16. | 1507.05910#16 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 17 | Formally, if we have L levels of clustering, let I_l be a set of indices for the clusters at level l ∈ {0, . . . , L}. Let c_i^(l), i ∈ I_l, be the centroids at level l, with the centroids of the last level {c_i^(L)} conveniently defined as being the data points themselves, and let a_i^(l) ∈ I_{l−1}, i ∈ I_l, be the assignment of the centroid c_i^(l) to a cluster of layer l − 1. The candidate set is found using the method described in Algorithm 2. Our candidate set is the set C_L obtained at the end of the algorithm. In our approach,
# Algorithm 2 Search in hierarchical spherical k-means
C_0 = I_0
for l = 0, . . . , L − 1 do
    A_l = argmax^(p)_{i ∈ C_l} q^T c_i^(l)    (the p best-matching clusters at level l)
    C_{l+1} = {i | a_i^(l+1) ∈ A_l}
end for
return C_L | 1507.05910#17 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 18 | we do a bottom-up clustering, i.e. we ï¬rst cluster the dataset into small clusters, then we cluster the small cluster into bigger clusters, and so on until we get to the top level which is only one cluster. Other approaches have been suggested such as in (Mnih and Hinton, 2009), where the method employed is a top-down clustering strategy where at each level the points assigned to the current cluster are divided in smaller clusters. The approach of (Mnih and Hinton, 2009) also addresses the problem that using a single lowest-level cluster as a candidate set is an inaccurate solution by having the data points be in multiple clusters. We use an alternative solution that consists in exploring several branches of the clustering hierarchy in parallel.
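A hedged numpy sketch of the resulting search walk (Algorithm 2 above): at each level only the p best-matching clusters are kept, and the next level is restricted to the centroids assigned to those clusters. The toy data, shapes and helper name are assumptions made for the example.

```python
# Rough sketch of the hierarchical search walk (not the authors' code).
import numpy as np

def hierarchical_candidates(q, level_centroids, level_parent, p=8):
    """level_centroids[l]: unit-norm centroids at level l (last level = the data points).
    level_parent[l][i]: index, at level l-1, of the cluster that centroid i of level l
    belongs to (level_parent[0] is unused)."""
    keep = np.arange(len(level_centroids[0]))                # C_0: all top-level clusters
    for l in range(len(level_centroids) - 1):
        scores = level_centroids[l][keep] @ q
        best = keep[np.argsort(-scores)[:p]]                 # A_l: p best clusters at level l
        child = np.arange(len(level_centroids[l + 1]))
        keep = child[np.isin(level_parent[l + 1], best)]     # C_{l+1}
    return keep                                              # candidate item indices

# toy usage with two levels of clusters above the data points
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32)); X /= np.linalg.norm(X, axis=1, keepdims=True)
C1 = rng.normal(size=(200, 32)); C1 /= np.linalg.norm(C1, axis=1, keepdims=True)
C0 = rng.normal(size=(15, 32));  C0 /= np.linalg.norm(C0, axis=1, keepdims=True)
parent_of_C1 = np.argmax(C1 @ C0.T, axis=1)                  # level-1 -> level-0 assignment
parent_of_X = np.argmax(X @ C1.T, axis=1)                    # data points -> level-1 assignment
q = rng.normal(size=32); q /= np.linalg.norm(q)
cands = hierarchical_candidates(q, [C0, C1, X], [None, parent_of_C1, parent_of_X], p=8)
```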
# 5 EXPERIMENTS
In this section, we will evaluate the proposed algorithm for approximate MIPS. Specifically, we analyze the following characteristics: speedup, compared to the exact full linear search, of retrieving top-K items with largest inner product, and robustness of retrieved results to noise in the query.
# 5.1 DATASETS
We have used 2 collaborative ï¬ltering datasets and 1 word embedding dataset, which are descibed below: | 1507.05910#18 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 19 | # 5.1 DATASETS
We have used 2 collaborative filtering datasets and 1 word embedding dataset, which are described below:
Movielens-10M: A collaborative filtering dataset with 10,677 movies (items) and 69,888 users. Given the user-item matrix Z, we follow the pureSVD procedure described in (Cremonesi et al., 2010) to generate user and movie vectors. Specifically, we subtracted the average rating of each user from his individual ratings and considered unobserved entries as zeros. Then we compute an SVD approximation of Z with its top 150 singular components, Z ≈ WΣR^T. Each row in WΣ is used as the vector representation of the user and each row in R is the vector representation of the movie. We construct a database of all 10,677 movies and consider 60,000 randomly selected users as queries.
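A small sketch of this pureSVD-style construction on synthetic data (the masking scheme, rating range and sizes below are stand-ins, not MovieLens itself):

```python
# Hedged sketch of a pureSVD-style embedding construction (Cremonesi et al., 2010 style):
# mean-center each user's observed ratings, treat missing entries as zeros, take a
# rank-150 SVD, use rows of W*Sigma as user vectors and rows of R as item vectors.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 1000, 500, 150

Z = np.zeros((n_users, n_items))
mask = rng.random((n_users, n_items)) < 0.05                 # ~5% observed ratings
Z[mask] = rng.integers(1, 6, size=mask.sum())                # ratings in 1..5

user_means = Z.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
Z[mask] -= np.repeat(user_means, mask.sum(axis=1))           # center only the observed entries

W, S, Rt = np.linalg.svd(Z, full_matrices=False)             # Z ~ W diag(S) Rt
user_vecs = W[:, :rank] * S[:rank]                           # rows of W * Sigma
item_vecs = Rt[:rank].T                                      # rows of R

scores = user_vecs @ item_vecs.T                             # inner products used for MIPS
```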
Netï¬ix: Another standard collaborative ï¬ltering dataset with 17,770 movies (items) and 480,189 users. We follow the same procedure as described for movielens but construct 300 dimensional vector representations, as is standard in the literature (Neyshabur and Srebro, 2015). We consider 60,000 randomly selected users as queries. | 1507.05910#19 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 20 | Word2vec embeddings: We use the 300-dimensional word2vec embeddings released by Mikolov et al. (2013). We construct a database composed of the ï¬rst 100,000 word embedding vectors. We consider two types of queries: 2,000 randomly selected word vectors from that database, and 2,000 randomly selected word vectors from the database corrupted with Gaussian noise. This acts as a test bench to evaluate the performance of different algorithms based on the characteristics of the queries.
5.2 BASELINES
We consider the following baselines to compare with.
PCA-Tree: PCA-Tree (Bachrach et al., 2014) is the state-of-the-art tree-based method which was shown to be superior to IP-Tree (Koenigstein et al., 2012). This method ï¬rst converts MIPS to NNS by appending an additional component to the vectors to make them of constant norm. Then the principal directions are learnt and the data is projected using these principal directions. Finally, a balanced tree is constructed using as splitting criteria at each level the median of component values along the corresponding principal direction. Each level uses a different principal direction, in decreasing order of variance. | 1507.05910#20 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
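For concreteness, here is a rough sketch of a PCA-Tree-style index following the PCA-Tree baseline description above; the exact MIPS-to-NNS augmentation, leaf size and splitting details of Bachrach et al. (2014) may differ from this simplified version.

```python
# Illustrative PCA-Tree-style index: norm-equalizing augmentation, PCA projection,
# then a balanced tree with a median split on one principal direction per level.
import numpy as np

def build_pca_tree(X, depth):
    norms = np.linalg.norm(X, axis=1)
    M = norms.max()
    Xa = np.hstack([X, np.sqrt(M ** 2 - norms ** 2)[:, None]])   # MIPS -> NNS augmentation
    mu = Xa.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xa - mu, full_matrices=False)
    proj = (Xa - mu) @ Vt[:depth].T                               # one direction per tree level
    leaves, medians = {0: np.arange(len(X))}, {}
    for d in range(depth):
        new_leaves = {}
        for node, idx in leaves.items():
            m = np.median(proj[idx, d])
            medians[(d, node)] = m
            new_leaves[2 * node] = idx[proj[idx, d] <= m]
            new_leaves[2 * node + 1] = idx[proj[idx, d] > m]
        leaves = new_leaves
    return mu, Vt[:depth], medians, leaves

def query_leaf(q, mu, dirs, medians, depth):
    qa = np.append(q, 0.0)                                        # query gets a zero extra component
    qp = (qa - mu) @ dirs.T
    node = 0
    for d in range(depth):
        node = 2 * node + (0 if qp[d] <= medians[(d, node)] else 1)
    return node                                                    # candidate set = leaves[node]

X = np.random.default_rng(0).normal(size=(4096, 50))
mu, dirs, medians, leaves = build_pca_tree(X, depth=6)             # 2^6 = 64 leaves of ~64 items
q = np.random.default_rng(1).normal(size=50)
cands = leaves[query_leaf(q, mu, dirs, medians, depth=6)]
```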
1507.05910 | 21 | SRP-Hash: This is the signed random projection hashing method for MIPS proposed in Shrivastava and Li (2015). SRP-Hash converts MIPS to MCSS by vector augmentation. We consider n hash functions and each hash function considers p random projections of the vector to compute the hash.
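A minimal sketch of such a signed-random-projection hash (the vector augmentation from MIPS to MCSS is assumed to have been applied already; the number of tables and bits are illustrative):

```python
# Sketch of SRP hashing: n hash tables, each built from p random projections whose
# signs form a p-bit code; colliding items form the candidate set.
import numpy as np

def srp_hashes(X, n_hashes=8, p=8, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    planes = rng.normal(size=(n_hashes, p, d))                  # p random projections per table
    bits = np.einsum('hpd,nd->nhp', planes, X) > 0              # sign of each projection
    weights = 1 << np.arange(p)
    return bits.astype(np.int64) @ weights                      # (n_points, n_hashes) integer codes

X = np.random.default_rng(1).normal(size=(1000, 64))
q = np.random.default_rng(2).normal(size=(1, 64))
db_codes, q_codes = srp_hashes(X), srp_hashes(q)
# candidate set: items that collide with the query in at least one hash table
cands = np.flatnonzero((db_codes == q_codes).any(axis=1))
```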
WTA-Hash: Winner Takes All hashing (Vijayanarasimhan et al., 2014) is another hashing-based baseline which also converts MIPS to MCSS by vector augmentation. We consider n hash functions and each hash function does p different random permutations of the vector. Then the prefix constituted by the first k elements of each permuted vector is used to construct the hash for the vector.
5.3 SPEEDUP RESULTS
In these first experiments, we consider the two collaborative filtering tasks and evaluate the speedup provided by the different approximate K-MIPS algorithms (for K ∈ {1, 10, 100}) compared to the exact full search. Note that this section does not include the hierarchical version of k-means in the experiments, as the databases were small enough (less than 20,000) for flat k-means to perform well.
Speciï¬cally, speedup is deï¬ned as | 1507.05910#21 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 22 | Specifically, speedup is defined as
speedup_A0(A) = (Time taken by Algorithm A0) / (Time taken by Algorithm A)    (7)
where A0 is the exact linear search algorithm that consists in computing the inner product with all training items. Because we want to compare the performance of algorithms, rather than of specifically optimized implementations, we approximate the time with the number of dot product operations computed by the algorithm². In other words, our unit of time is the time taken by a dot product.
2For example, k-means algorithm was run using GPU while PCA-Tree was run using CPU.
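Under this cost model, the speedup of Eq. (7) reduces to a ratio of dot-product counts; for example (with made-up numbers):

```python
# Tiny illustration of the speedup metric under the dot-product cost model:
# the exact baseline costs n dot products, an approximate method costs the
# probing cost plus the candidate set size.
n_items = 100_000                 # exact linear scan: one dot product per item
k_clusters = 1_000                # dot products spent ranking the cluster centroids
candidate_size = 2_300            # dot products spent re-scoring the candidate set

cost_exact = n_items
cost_approx = k_clusters + candidate_size
speedup = cost_exact / cost_approx          # ~30x for these made-up numbers
print(f"speedup ~ {speedup:.1f}x")
```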
All algorithms return a set of candidates for which we do exact linear search. This induces a number of dot products at least as large as the size of the identified candidate set. In addition to the candidate set size, the following operations count towards the count of dot products:
k-means: dot products done with all cluster centroids involved in finding the top-p clusters of the (hierarchical) search.
PCA-Tree: dot product done to project the query to the PCA space. Note that if the tree is of depth d, then we need to do d dot products to project the query. | 1507.05910#22 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 23 | SRP-Hash: total number of random projections of the data (each random projection is considered a single dot product). If we have n hashes with p random projections each, then the cost is p × n.
WTA-Hash: a full random permutation of the vector involves the same number of query element access operations as a single dot product. However, we consider only k preï¬xes in the permutations, which means we only need to do a fraction of dot product. While a dot product involves accessing all d components of the vector, each permutation in WTA-Hash only needs to access k elements of the vector. So we consider its cost to be a fraction k/d of the cost of a dot product. Speciï¬cally, if we have n hash functions each with p random permutations and consider preï¬xes of length k, then the total cost would be n â p â k/d where d is the dimension of the vector. | 1507.05910#23 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 24 | Let us call true top-K the actual K elements from the database that have the largest inner products with the query. Let us call retrieved top-K the K elements, among the candidate set retrieved by a specific approximate MIPS, that have the largest inner products with the query. We define precision for K-MIPS as the number of elements in the intersection of true top-K and retrieved top-K vectors, divided by K.
precision at K = |retrieved top-K ∩ true top-K| / K    (8)
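Equivalently, in code (a small sketch with a random placeholder candidate set; any of the compared methods would supply the real one):

```python
# Sketch of the precision-at-K measure of Eq. (8).
import numpy as np

def precision_at_k(q, X, candidates, K=10):
    true_top = np.argsort(-(X @ q))[:K]                          # exact top-K by inner product
    retrieved = candidates[np.argsort(-(X[candidates] @ q))[:K]] # top-K inside the candidate set
    return len(np.intersect1d(true_top, retrieved)) / K

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32))
q = rng.normal(size=32)
candidates = rng.choice(5000, size=500, replace=False)          # stand-in candidate set
print(precision_at_k(q, X, candidates, K=10))
```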
We varied hyper-parameters of each algorithm (k in k-means, depth in PCA-Tree, number of hash functions in SRP-Hash and WTA-Hash), and computed the precision and speedup in each case. Resulting precision vs. speedup curves obtained for the Movielens-10M and Netflix datasets are reported in Figure 2. We make the following observations from these results:
⢠Hashing-based methods perform better with lower speedups. But their performance de- crease rapidly after 10x speedup.
• PCA-Tree performs better than SRP-Hash.
• WTA-Hash performs better than PCA-Tree with lower speedups. However, their performance degrades faster as the speedup increases and PCA-Tree outperforms WTA-Hash with higher speedups. | 1507.05910#24 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 26 | In this experiment, we consider a word embedding retrieval task. As a ï¬rst experiment, we con- sider using a query set of 2,000 embeddings, corresponding to a subset of a large database of pre- trained embeddings. Note that while a query is thus present in the database, it is not guaranteed to correspond to the top-1 MIPS result. Also, weâll be interested in the top-10 and top-100 MIPS performance. Algorithms which perform better in top-10 and top-100 MIPS for queries which al- ready belong to the database preserve the neighborhood of data points better. Figure 3 shows the precision vs. speedup curve for top-1, top-10 and top-100 MIPS. From the results, we can see that data dependent algorithms (k-means and PCA-Tree) better preserve the neighborhood, compared to data independent algorithms (SRP-Hash, WTA-Hash), which is not surprising. However, k-means and hierarchical k-means performs signiï¬cantly better than PCA-Tree in top-10 and top-100 MIPS suggesting that it is better than PCA-Tree in capturing the neighborhood. One reason might be that k-means has the global view of the vector at every step while PCA-Tree considers one dimension at a time. | 1507.05910#26 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 27 | As the next experiment, we would like to study how different algorithms behave with respect to the noise in the query. For a fair comparison, we chose hyper-parameters for each model such that the
Figure 2: Speedup results in collaborative filtering. (a-c) correspond to precision in top 1,10,100 MIPS on the Movielens-10M dataset, while (d-f) correspond to precision in top 1,10,100 MIPS on the Netflix dataset respectively. k-means(3) means the k-means algorithm that considers the top 3 clusters as candidate set.
Figure 3: Speedup results in word embedding retrieval. (a-c) correspond to precision in top 1,10,100 MIPS respectively. k-means(3) means k-means algorithm that considers top 3 clusters as candidate set. and hier-k-means(8)s means a 2 level hierarchical k-means algorithm that considers top 8 clus- ters as candidate set. | 1507.05910#27 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 27 | speedup is the same (we set it to 30x) for all algorithms. We take 2,000 random word embeddings from the database and corrupt them with random Gaussian noise. We vary the scale of the noise from 0 to 0.4 and plot the performance. Figure 4 shows the performance of various algorithms on the top-1, top-10, top-100 MIPS problems, as the noise increases. We can see that k-means always performs better than other algorithms, even with increase in noise. Also, the performance of k-means remains reasonable, compared to other algorithms. These results suggest that our approach might be particularly appropriate in a scenario where word embeddings are simultaneously being trained, and are thus not fixed. In such a scenario, having a robust MIPS method would allow us to update the MIPS model less frequently.
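A simplified sketch of this robustness protocol (the candidate-set routine is left as a random placeholder, so the printed numbers are not meaningful; it only illustrates the evaluation loop of corrupting queries with increasing Gaussian noise and tracking precision):

```python
# Sketch of the query-noise robustness experiment described above.
import numpy as np

def precision_at_k(q, X, candidates, K=10):
    true_top = np.argsort(-(X @ q))[:K]
    retrieved = candidates[np.argsort(-(X[candidates] @ q))[:K]]
    return len(np.intersect1d(true_top, retrieved)) / K

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))
queries = X[rng.choice(2000, size=100, replace=False)]          # queries taken from the database

for sigma in (0.0, 0.1, 0.2, 0.4):
    scores = []
    for q in queries:
        q_noisy = q + rng.normal(scale=sigma, size=q.shape) if sigma else q
        candidates = rng.choice(2000, size=200, replace=False)  # placeholder candidate set
        scores.append(precision_at_k(q_noisy, X, candidates))
    print(sigma, np.mean(scores))
```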
Figure 4: Precision in top-K retrieval as the noise in the query increases. We increase the standard deviation of the Gaussian noise and we see that k-means performs better than other algorithms.
# 6 CONCLUSION AND FUTURE WORK | 1507.05910#28 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 29 | # 6 CONCLUSION AND FUTURE WORK
In this paper, we have proposed a new and efï¬cient way of solving approximate K-MIPS based on a simple clustering strategy, and showed it can be a good alternative to the more popular LSH or tree-based techniques. We regard the simplicity of this approach as one of its strengths. Empirical results on three real-world datasets show that this simple approach clearly outperforms the other families of techniques. It achieves a larger speedup while maintaining precision, and is more robust to input corruption, an important property for generalization, as query test points are expected to not be exactly equal to training data points. Clustering MIPS generalizes better to related, but unseen data than the hashing approaches we evaluated.
In future work, we plan to research ways to adapt on-the-ï¬y the clustering for our approximate K- MIPS as its input representation evolves during the learning of a model, leverage efï¬cient K-MIPS to speed up extreme classiï¬er training and improve precision and speedup by combining multiple clusterings. | 1507.05910#29 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 30 | Finally, we mention that, while putting the ï¬nal touches to this paper, another very recent and dif- ferent MIPS approach, based on vector quantization, came to our knowledge (Guo et al., 2015). We highlight that the ï¬rst arXiv post of our work predates their work. Nevertheless, while we did not have time to empirically compare to this approach here, we hope to do so in future work.
# ACKNOWLEDGEMENTS
The authors would like to thank the developers of Theano (Bergstra et al., 2010) for developing such a powerful tool. We acknowledge the support of the following organizations for research funding and computing support: Samsung, NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs and CIFAR.
# REFERENCES
Yoram Bachrach, Yehuda Finkelstein, Ran Gilad-Bachrach, Liran Katzir, Noam Koenigstein, Nir Nice, and Ulrich Paquet. Speeding up the xbox recommender system using a euclidean transfor- mation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, RecSys â14, pages 257â264, 2014. | 1507.05910#30 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 31 | Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic ISSN 1532-4435. URL language model. J. Mach. Learn. Res., 3:1137â1155, March 2003. http://dl.acm.org/citation.cfm?id=944919.944966.
James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU In Proceedings of the Python for Scientiï¬c Computing Conference math expression compiler. (SciPy), June 2010.
Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys â10, pages 39â46, 2010. | 1507.05910#31 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 32 | Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Twentieth Annual Symposium on Computational Geometry, SCG â04, pages 253â262, 2004.
Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. Quantization based fast inner product search. CoRR, abs/1509.01469, 2015. URL http://arxiv.org/abs/1509. 01469.
Noam Koenigstein, Parikshit Ram, and Yuval Shavitt. Efï¬cient retrieval of recommendations in In Proceedings of the 21st ACM International Conference a matrix factorization framework. on Information and Knowledge Management, CIKM â12, pages 535â544, New York, NY, USA, 2012. ACM. | 1507.05910#32 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 33 | Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representa- tions of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Sys- tems 26, pages 3111â3119. Curran Associates, Inc., 2013.
Andriy Mnih and Geoffrey Hinton. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, volume 21, pages 1081â1088, 2009.
Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Robert G. Cowell and Zoubin Ghahramani, editors, Proceedings of the Tenth International Workshop on Artiï¬cial Intelligence and Statistics, pages 246â252, 2005.
Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search. In Proceedings of the 31st International Conference on Machine Learning, 2015. | 1507.05910#33 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.05910 | 34 | Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search. In Proceedings of the 31st International Conference on Machine Learning, 2015.
Parikshit Ram and Alexander G. Gray. Maximum inner-product search using cone trees. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '12, pages 931–939, 2012.
Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2321–2329, 2014.
Anshumali Shrivastava and Ping Li. Improved asymmetric locality sensitive hashing (ALSH) for maximum inner product search (MIPS). In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2015.
Sudheendra Vijayanarasimhan, Jon Shlens, Rajat Monga, and Jay Yagnik. Deep networks with large output spaces. arXiv preprint arXiv:1412.7479, 2014. | 1507.05910#34 | Clustering is Efficient for Approximate Maximum Inner Product Search | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise. | http://arxiv.org/pdf/1507.05910 | Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio | cs.LG, cs.CL, stat.ML | 10 pages, Under review at ICLR 2016 | null | cs.LG | 20150721 | 20151130 | [] |
1507.02030 | 0 | arXiv:1507.02030v3 [cs.LG] 28 Oct 2015
# Beyond Convexity: Stochastic Quasi-Convex Optimization
Elad Hazan*    Kfir Y. Levy†    Shai Shalev-Shwartz‡
May 2014
# Abstract
Stochastic convex optimization is a basic and well studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized eï¬- ciently using Stochastic Gradient Descent (SGD). | 1507.02030#0 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the con- cept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 1 | The Normalized Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which updates according to the direction of the gradients, rather than the gradients themselves. In this paper we analyze a stochastic version of NGD and prove its convergence to a global minimum for a wider class of functions: we require the functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens the con- cept of unimodality to multidimensions and allows for certain types of saddle points, which are a known hurdle for ï¬rst-order optimization methods such as gradient descent. Locally-Lipschitz functions are only required to be Lipschitz in a small region around the optimum. This assumption circumvents gradient explosion, which is another known hurdle for gradient descent variants.
Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient descent algorithm provably requires a minimal minibatch size.
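For intuition, here is a minimal sketch of the normalized-gradient update this abstract refers to; the toy objective, step size and iteration count are chosen arbitrarily for the example.

```python
# Sketch of a Normalized Gradient Descent (NGD) step: move in the direction of the
# gradient (its normalization) rather than by the raw gradient itself.
import numpy as np

def ngd(grad_f, x0, lr=0.05, steps=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x)
        norm = np.linalg.norm(g)
        if norm == 0:
            break
        x = x - lr * g / norm          # step size does not depend on the gradient magnitude
    return x

# simple smooth toy objective: f(x) = sqrt(1 + ||x||^2), minimized at the origin
grad = lambda x: x / np.sqrt(1.0 + x @ x)
print(ngd(grad, x0=np.ones(5) * 3.0))
```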
1
# Introduction | 1507.02030#1 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the con- cept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 2 | Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient descent algorithm provably requires a minimal minibatch size.
1
# Introduction
The benefits of using the Stochastic Gradient Descent (SGD) scheme for learning could not be stressed enough. For convex and Lipschitz objectives, SGD is guaranteed to find an ε-optimal solution within O(1/ε²) iterations and requires only an unbiased estimator for the gradient, which is obtained with only one (or a few) data samples. However, when applied to non-convex problems several drawbacks are revealed. In particular, SGD is widely used for deep learning (Bengio, 2009), one of the most interesting fields where stochastic non-convex optimization problems arise. Often, the objective in these kinds of problems demonstrates
*Princeton University; ehazan@cs.princeton.edu.  †Technion; kfiryl@tx.technion.ac.il.  ‡The Hebrew University; shais@cs.huji.ac.il.
1
two extreme phenomena: on the one hand plateaus, i.e., regions with very small gradients; and on the other hand sharp cliï¬s, i.e., exceedingly high gradients. As is expected, applying SGD to these problems is often reported to yield unsatisfactory results. | 1507.02030#2 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the con- cept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |