doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1601.04468 | 37 | Och, F. J. (2003). Minimum error rate training in statistical machine translation. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL), Edmonton, Canada.
Papandreou, G. and Yuille, A. (2011). Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. In IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain.
Polyak, B. T. (1987). Introduction to Optimization. Optimization Software, Inc., New York.
Polyak, B. T. and Juditsky, A. B. (1992). Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855.
Polyak, B. T. and Tsypkin, Y. Z. (1973). Pseudogradient adaptation and training algorithms. Automation and Remote Control, 34(3):377–397. | 1601.04468#37 | Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
Bandit Structured Prediction, where only the value of a task loss function at a
single predicted point, instead of a correct structure, is observed in
learning. We present an application to discriminative reranking in Statistical
Machine Translation (SMT) where the learning algorithm only has access to a
1-BLEU loss evaluation of a predicted translation instead of obtaining a gold
standard reference translation. In our experiment bandit feedback is obtained
by evaluating BLEU on reference translations without revealing them to the
algorithm. This can be thought of as a simulation of interactive machine
translation where an SMT system is personalized by a user who provides single
point feedback to predicted translations. Our experiments show that our
approach improves translation quality and is comparable to approaches that
employ more informative feedback in learning. | http://arxiv.org/pdf/1601.04468 | Artem Sokolov, Stefan Riezler, Tanguy Urvoy | cs.CL, cs.LG | In Proceedings of MT Summit XV, 2015. Miami, FL | null | cs.CL | 20160118 | 20160118 | [] |
1601.04468 | 38 | Riezler, S. and Maxwell, J. (2005). On some pitfalls in automatic evaluation and significance testing for MT. In Proceedings of the ACL-05 Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, Ann Arbor, MI.
Riezler, S., Simianer, P., and Haas, C. (2014). Response-based learning for grounded machine translation. In Meeting of the Association for Computational Linguistics (ACL), Baltimore, MD, USA.
Robbins, H. (1952). Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58:527–535.
Saluja, A. and Zhang, Y. (2014). Online discriminative learning for machine translation with binary-valued feedback. Machine Translation, 28:69–90.
Shalev-Shwartz, S. (2012). Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194.
Smith, D. A. and Eisner, J. (2006). Minimum risk annealing for training log-linear models. In International Committee on Computational Linguistics and the Association for Computational Linguistics (COLING-ACL), Sydney, Australia.
1601.04468 | 39 | Snover, M., Dorr, B., Schwartz, R., Micciulla, L., and Makhoul, J. (2006). A study of translation edit rate with targeted human annotation. In Conference of the Association for Machine Translation in the Americas (AMTA), Cambridge, MA, USA.
Sokolov, A., Riezler, S., and Cohen, S. B. (2015). A coactive learning view of online structured prediction in statistical machine translation. In Proceedings of the Conference on Computational Natural Language Learning (CoNLL), Beijing, China.
Spall, J. C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. Wiley.
Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada.
Taskar, B., Klein, D., Collins, M., Koller, D., and Manning, C. (2004). Max-margin parsing. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Barcelona, Spain.
1601.04468 | 40 | Tsochantaridis, I., Joachims, T., Hofmann, T., and Altun, Y. (2005). Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 5:1453–1484.
Wuebker, J., Muehr, S., Lehnen, P., Peitz, S., and Ney, H. (2015). A comparison of update strategies for large-scale maximum expected BLEU training. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Denver, CO, USA.
Yue, Y. and Joachims, T. (2009). Interactively optimizing information retrieval systems as a dueling bandits problem. In International Conference on Machine Learning (ICML), Montreal, Canada.
Yuille, A. and He, X. (2012). Probabilistic models of vision and max-margin methods. Frontiers of Electrical and Electronic Engineering, 7(1).
1601.01705 | 0 | arXiv:1601.01705v4 [cs.CL] 7 Jun 2016
# Learning to Compose Neural Networks for Question Answering
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. {jda,rohrbach,trevor,klein}@eecs.berkeley.edu
# Abstract
We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural module network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.
[Figure 1 overview: the question "What cities are in Georgia?" is mapped, via a module inventory (Section 4.1) and a network layout (Section 4.2), to the composed network (and find[city] (relate[in] lookup[Georgia])), which is applied to a knowledge source to produce the answer "Atlanta".] | 1601.01705#0 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural module
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 1 | Figure 1: A learned syntactic analysis (a) is used to assemble a collection of neural modules (b) into a deep neural network (c), and applied to a world representation (d) to produce an answer.
# 1 Introduction
This paper presents a compositional, attentional model for answering questions about a variety of world representations, including images and structured knowledge bases. The model translates from questions to dynamically assembled neural networks, then applies these networks to world representations (images or knowledge bases) to produce answers. We take advantage of two largely independent lines of work: on one hand, an extensive literature on answering questions by mapping from strings to logical representations of meaning; on the other, a series of recent successes in deep neural models for image recognition and captioning. By constructing neural networks instead of logical forms, our model leverages the best aspects of both linguistic compositionality and continuous representations.
Previous work has used manually-specified modular structures for visual learning (Andreas et al., 2016). Here we:
• learn a network structure predictor jointly with the module parameters themselves
• extend the visual primitives from previous work to reason over structured world representations
1601.01705 | 2 | Training data consists of (world, question, answer) triples: our approach requires no supervision of network layouts. We achieve state-of-the-art performance on two markedly different question answering tasks: one with questions about natural images, and another with more compositional questions about United States geography.1
Our model has two components, trained jointly: first, a collection of neural "modules" that can be freely composed (Figure 1a); second, a network layout predictor that assembles modules into complete deep networks tailored to each question (Figure 1b).
# 2 Deep networks as functional programs
We begin with a high-level discussion of the kinds of composed networks we would like to learn.
1We have released our code at http://github.com/jacobandreas/nmn2
1601.01705 | 3 | Andreas et al. (2016) describe a heuristic approach for decomposing visual question answering tasks into a sequence of modular sub-problems. For example, the question What color is the bird? might be answered in two steps: first, "where is the bird?" (Figure 2a), and second, "what color is that part of the image?" (Figure 2c). This first step, a generic module called find, can be expressed as a fragment of a neural network that maps from image features and a lexical item (here bird) to a distribution over pixels. This operation is commonly referred to as the attention mechanism, and is a standard tool for manipulating images (Xu et al., 2015) and text representations (Hermann et al., 2015).
1601.01705 | 4 | The first contribution of this paper is an extension and generalization of this mechanism to enable fully differentiable reasoning about more structured semantic representations. Figure 2b shows how the same module can be used to focus on the entity Georgia in a non-visual grounding domain; more generally, by representing every entity in the universe of discourse as a feature vector, we can obtain a distribution over entities that corresponds roughly to a logical set-valued denotation.
Having obtained such a distribution, existing neural approaches use it to immediately compute a weighted average of image features and project back into a labeling decision (a describe module, Figure 2c). But the logical perspective suggests a number of novel modules that might operate on attentions: e.g. combining them (by analogy to conjunction or disjunction) or inspecting them directly without a return to feature space (by analogy to quantification, Figure 2d). These modules are discussed in detail in Section 4. Unlike their formal counterparts, they are differentiable end-to-end, facilitating their integration into learned models. Building on previous work, we learn behavior for a collection of heterogeneous modules from (world, question, answer) triples.
1601.01705 | 5 | The second contribution of this paper is a model for learning to assemble such modules compositionally. Isolated modules are of limited use: to obtain expressive power comparable to either formal approaches or monolithic deep networks, they must be composed into larger structures. Figure 2 shows simple examples of composed structures, but for realistic question-answering tasks, even larger networks are required.
Figure 2: Simple neural module networks, corresponding to the questions What color is the bird? and Are there any states? (a) A neural find module for computing an attention over pixels. (b) The same operation applied to a knowledge base. (c) Using an attention produced by a lower module to identify the color of the region of the image attended to. (d) Performing quantification by evaluating an attention directly.
Thus our goal is to automatically induce variable-free, tree-structured computation descriptors. We can use a familiar functional notation from formal semantics (e.g. Liang et al., 2011) to represent these computations.2 We write the two examples in Figure 2 as
(describe[color] find[bird])
and
(exists find[state])
1601.01705 | 6 | respectively. These are network layouts: they specify a structure for arranging modules (and their lexical parameters) into a complete network. Andreas et al. (2016) use hand-written rules to deterministically transform dependency trees into layouts, and are restricted to producing simple structures like the above for non-synthetic data. For full generality, we will need to solve harder problems, like transforming What cities are in Georgia? (Figure 1) into
(and
find[city] (relate[in] lookup[Georgia]))
In this paper, we present a model for learning to select such structures from a set of automatically generated candidates. We call this model a dynamic neural module network.
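To make the layout notation concrete, the short sketch below shows one way such variable-free layouts can be represented as nested tuples and printed in the functional notation used above. The data structure is our own illustrative choice, not the paper's released code; module and parameter names are taken from the examples in the text.

```python
# Illustrative sketch: layouts as nested tuples of module names.

def layout_to_str(layout):
    """Render a nested-tuple layout in the functional notation used above."""
    if isinstance(layout, tuple):
        head, *args = layout
        return "(" + " ".join([head] + [layout_to_str(a) for a in args]) + ")"
    return layout  # a leaf such as "find[city]"

birds = ("describe[color]", "find[bird]")
states = ("exists", "find[state]")
georgia = ("and", "find[city]", ("relate[in]", "lookup[Georgia]"))

for layout in (birds, states, georgia):
    print(layout_to_str(layout))
```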
2But note that unlike formal semantics, the behavior of the primitive functions here is itself unknown.
# 3 Related work
1601.01705 | 7 | There is an extensive literature on database question answering, in which strings are mapped to logical forms, then evaluated by a black-box execution model to produce answers. Supervision may be provided either by annotated logical forms (Wong and Mooney, 2007; Kwiatkowski et al., 2010; Andreas et al., 2013) or from (world, question, answer) triples alone (Liang et al., 2011; Pasupat and Liang, 2015). In general the set of primitive functions from which these logical forms can be assembled is fixed, but one recent line of work focuses on inducing new predicates automatically, either from perceptual features (Krishnamurthy and Kollar, 2013) or the underlying schema (Kwiatkowski et al., 2013). The model we describe in this paper has a unified framework for handling both the perceptual and schema cases, and differs from existing work primarily in learning a differentiable execution model with continuous evaluation results.
1601.01705 | 8 | Neural models for question answering are also a subject of current interest. These include approaches that model the task directly as a multiclass classification problem (Iyyer et al., 2014), models that attempt to embed questions and answers in a shared vector space (Bordes et al., 2014), and attentional models that select words from document sources (Hermann et al., 2015). Such approaches generally require that answers can be retrieved directly based on surface linguistic features, without requiring intermediate computation. A more structured approach described by Yin et al. (2015) learns a query execution model for database tables without any natural language component. Previous efforts toward unifying formal logic and representation learning include those of Grefenstette (2013), Krishnamurthy and Mitchell (2013), Lewis and Steedman (2013), and Beltagy et al. (2013).
The visually-grounded component of this work relies on recent advances in convolutional networks for computer vision (Simonyan and Zisserman, 2014), and in particular the fact that late convolutional layers in networks trained for image recognition contain rich features useful for other vision tasks while preserving spatial information. These features have been used for both image captioning (Xu et al., 2015) and visual QA (Yang et al., 2015).
1601.01705 | 9 | Most previous approaches to visual question answering either apply a recurrent model to deep representations of both the image and the question (Ren et al., 2015; Malinowski et al., 2015), or use the question to compute an attention over the input image and then answer based on both the question and the image features attended to (Yang et al., 2015; Xu and Saenko, 2015). Other approaches include the simple classification model described by Zhou et al. (2015) and the dynamic parameter prediction network described by Noh et al. (2015). All of these models assume that a fixed computation can be performed on the image and question to compute the answer, rather than adapting the structure of the computation to the question.
1601.01705 | 10 | As noted, Andreas et al. (2016) previously considered a simple generalization of these attentional approaches in which small variations in the network structure per question were permitted, with the structure chosen by (deterministic) syntactic processing of questions. Other approaches in this general family include the "universal parser" sketched by Bottou (2014), the graph transformer networks of Bottou et al. (1997), the knowledge-based neural networks of Towell and Shavlik (1994), and the recursive neural networks of Socher et al. (2013), which use a fixed tree structure to perform further linguistic analysis without any external world representation. We are unaware of previous work that simultaneously learns both parameters for and structures of instance-specific networks.
# 4 Model
Recall that our goal is to map from questions and world representations to answers. This process involves the following variables:
1. w, a world representation
2. x, a question
3. y, an answer
4. z, a network layout
5. θ, a collection of model parameters
Our model is built around two distributions: a layout model p(z | x; θℓ), which chooses a layout for a sentence, and an execution model pz(y | w; θe), which applies the network specified by z to w.
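Abstractly, prediction composes these two distributions: pick a layout, then run the induced network. The minimal sketch below shows that composition with hypothetical callables standing in for the layout model, the execution model, and the candidate generator; none of these names come from the paper's released API.

```python
def predict(question, world, layout_model, execution_model, candidates):
    """Compose the two distributions: choose a layout, then run its network."""
    z = max(candidates(question), key=lambda c: layout_model(c, question))
    answer_dist = execution_model(z, world)        # plays the role of p_z(y | w; theta_e)
    return max(answer_dist, key=answer_dist.get)   # most probable answer y
```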
1601.01705 | 11 | For ease of presentation, we introduce these models in reverse order. We first imagine that z is always
observed, and in Section 4.1 describe how to evaluate and learn modules parameterized by θe within fixed structures. In Section 4.2 we move to the real scenario, where z is unknown. We describe how to predict layouts from questions and learn θℓ and θe jointly without layout supervision.
# 4.1 Evaluating modules
Given a layout z, we assemble the corresponding modules into a full neural network (Figure 1c), and apply it to the knowledge representation. Intermediate results flow between modules until an answer is produced at the root. We denote the output of the network with layout z on input world w as ⟦z⟧w; when explicitly referencing the substructure of z, we can alternatively write ⟦m(h¹, h²)⟧ for a top-level module m with submodule outputs h¹ and h². We then define the execution model:

pz(y | w) = (⟦z⟧w)y    (1)
1601.01705 | 12 | (This assumes that the root module of z produces a distribution over labels y.) The set of possible layouts z is restricted by module type constraints: some modules (like find above) operate directly on the input representation, while others (like describe above) also depend on input from specific earlier modules. The two base types considered in this paper are Attention (a distribution over pixels or entities) and Labels (a distribution over answers).
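As an illustration of how a layout is executed (Equation 1 together with the type constraints just described), the following sketch evaluates a nested-tuple layout bottom-up over a toy world. The module implementations are stand-ins supplied by the caller with the right type signatures, so this is a minimal interpreter for exposition, not the released nmn2 code.

```python
import numpy as np

def evaluate(layout, modules, world):
    """Recursively run the network described by `layout` on `world`.
    Leaves ("find[state]") take only the world; internal nodes
    ("and", "exists", ...) additionally take their children's outputs."""
    if isinstance(layout, tuple):
        head, *children = layout
        child_outputs = [evaluate(c, modules, world) for c in children]
        return modules[head](world, *child_outputs)
    return modules[layout](world)

# Toy world of 4 entities and toy modules with the right base types:
# Attention = length-4 vector, Labels = distribution over {"yes", "no"}.
world = np.arange(4)
modules = {
    "find[state]": lambda w: np.array([0.1, 0.7, 0.1, 0.1]),  # -> Attention
    "exists": lambda w, h: {"yes": float(h.max()), "no": 1.0 - float(h.max())},  # -> Labels
}
print(evaluate(("exists", "find[state]"), modules, world))
```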
1601.01705 | 13 | Parameters are tied across multiple instances of the same module, so different instantiated networks may share some parameters but not others. Modules have both parameter arguments (shown in square brackets) and ordinary inputs (shown in parentheses). Parameter arguments, like the running bird example above, are provided by the layout, and are used to specialize module behavior for particular lexical items. Ordinary inputs are the result of computation lower in the network. In addition to parameter-specific weights, modules have global weights shared across all instances of the module (but not shared with other modules). We write A, a, B, b, ... for global weights and uⁱ, vⁱ for weights associated with the parameter argument i. ⊕ and ⊙ denote (possibly broadcasted) elementwise addition and multiplication respectively. The complete set of global weights and parameter-specific weights constitutes θe. Every module has access to
the world representation, represented as a collection of vectors w1, w2, ... (or W expressed as a matrix). The nonlinearity σ denotes a rectified linear unit.
The modules used in this paper are shown below, with names and type constraints in the first row and a description of the module's computation following.
1601.01705 | 14 | Lookup (→ Attention): lookup[i] produces an attention focused entirely at the index f(i), where the relationship f between words and positions in the input map is known ahead of time (e.g. string matches on database fields):

⟦lookup[i]⟧ = e_f(i)    (2)

where e_i is the basis vector that is 1 in the ith position and 0 elsewhere.

Find (→ Attention): find[i] computes a distribution over indices by concatenating the parameter argument with each position of the input feature map, and passing the concatenated vector through an MLP:

⟦find[i]⟧ = softmax(a ⊙ σ(B uⁱ ⊕ C W ⊕ d))    (3)

Relate (Attention → Attention): relate directs focus from one region of the input to another. It behaves much like the find module, but also conditions its behavior on the current region of attention h. Let w̄(h) = Σ_k h_k wᵏ, where h_k is the kth element of h. Then,

⟦relate[i](h)⟧ = softmax(a ⊙ σ(B uⁱ ⊕ C W ⊕ D w̄(h) ⊕ e))    (4)
1601.01705 | 15 | And (Attention* → Attention): and performs an operation analogous to set intersection for attentions. The analogy to probabilistic logic suggests multiplying probabilities:

⟦and(h¹, h², ...)⟧ = h¹ ⊙ h² ⊙ ⋯    (5)

Describe (Attention → Labels): describe[i] computes a weighted average of w under the input attention. This average is then used to predict an answer representation. With w̄ as above,

⟦describe[i](h)⟧ = softmax(A σ(B w̄(h) + vⁱ))    (6)

Exists (Attention → Labels): exists is the existential quantifier, and inspects the incoming attention directly to produce a label, rather than an intermediate feature vector like describe:

⟦exists(h)⟧ = softmax((max_k h_k) a + b)    (7)
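To make the module definitions concrete, here is a minimal NumPy sketch of the attention-typed and label-typed modules above. It is an illustrative re-implementation of Equations 2–7 with made-up dimensions and randomly initialized weights, not the authors' released code; u and v stand for the per-word parameter vectors, W for the world representation, and B2 for the describe module's own B.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, H, L = 5, 8, 16, 3            # entities/positions, feature dim, hidden dim, labels
W = rng.normal(size=(N, D))         # world representation: one vector per entity

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def relu(x):
    return np.maximum(x, 0.0)

# Global weights (shared across instances of each module); shapes are illustrative.
B, C, Dm = rng.normal(size=(H, D)), rng.normal(size=(H, D)), rng.normal(size=(H, D))
a, d, e_vec = rng.normal(size=H), rng.normal(size=H), rng.normal(size=H)
A, B2 = rng.normal(size=(L, H)), rng.normal(size=(H, D))
a_ex, b_ex = rng.normal(size=L), rng.normal(size=L)
u = {"bird": rng.normal(size=D), "in": rng.normal(size=D)}  # parameter arguments u^i
v = {"color": rng.normal(size=H)}                           # parameter arguments v^i

def lookup(index):                  # Eq. 2: one-hot attention at a known position
    h = np.zeros(N); h[index] = 1.0
    return h

def find(word):                     # Eq. 3 (a ⊙ ... summed over H, i.e. a dot product)
    return softmax(np.array([a @ relu(B @ u[word] + C @ W[k] + d) for k in range(N)]))

def relate(word, h):                # Eq. 4: like find, conditioned on w̄(h) = Σ_k h_k w^k
    wbar = h @ W
    return softmax(np.array([a @ relu(B @ u[word] + C @ W[k] + Dm @ wbar + e_vec)
                             for k in range(N)]))

def and_(*hs):                      # Eq. 5: elementwise product of attentions
    out = np.ones(N)
    for h in hs:
        out = out * h
    return out

def describe(word, h):              # Eq. 6: label distribution from the attended average
    return softmax(A @ relu(B2 @ (h @ W) + v[word]))

def exists(h):                      # Eq. 7: label distribution from the attention itself
    return softmax(h.max() * a_ex + b_ex)

print(describe("color", find("bird")))                      # (describe[color] find[bird])
print(exists(and_(find("bird"), relate("in", lookup(0)))))  # a composed example
```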
[Figure 3 panels: the example question "What cities are in Georgia?", its dependency parse, the associated fragments find[city], relate[in], and lookup[Georgia], and candidate layouts assembled from them; see the caption below.]
1601.01705 | 16 | Figure 3: Generation of layout candidates. The input sentence (a) is represented as a dependency parse (b). Fragments of this dependency parse are then associated with appropriate modules (c), and these fragments are assembled into full layouts (d).
With z observed, the model we have described so far corresponds largely to that of Andreas et al. (2016), though the module inventory is different; in particular, our new exists and relate modules do not depend on the two-dimensional spatial structure of the input. This enables generalization to non-visual world representations.
Learning in this simplified setting is straightforward. Assuming the top-level module in each layout is a describe or exists module, the fully instantiated network corresponds to a distribution over labels conditioned on layouts. To train, we maximize Σ_(w,y,z) log pz(y | w; θe) directly. This can be understood as a parameter-tying scheme, where the decisions about which parameters to tie are governed by the observed layouts z.
# 4.2 Assembling networks
Next we describe the layout model p(z | x; θℓ). We first use a fixed syntactic parse to generate a small set of candidate layouts, analogously to the way a semantic grammar generates candidate semantic parses in previous work (Berant and Liang, 2014).
1601.01705 | 17 | A semantic parse differs from a syntactic parse in two primary ways. First, lexical items must be
mapped onto a (possibly smaller) set of semantic primitives. Second, these semantic primitives must be combined into a structure that closely, but not exactly, parallels the structure provided by syntax. For example, state and province might need to be identified with the same field in a database schema, while all states have a capital might need to be identified with the correct (in situ) quantifier scope.
1601.01705 | 18 | While we cannot avoid the structure selection problem, continuous representations simplify the lexical selection problem. For modules that accept a vector parameter, we associate these parameters with words rather than semantic tokens, and thus turn the combinatorial optimization problem associated with lexicon induction into a continuous one. Now, in order to learn that province and state have the same denotation, it is sufficient to learn that their associated parameters are close in some embedding space, a task amenable to gradient descent. (Note that this is easy only in an optimizability sense, and not an information-theoretic one; we must still learn to associate each independent lexical item with the correct vector.) The remaining combinatorial problem is to arrange the provided lexical items into the right computational structure. In this respect, layout prediction is more like syntactic parsing than ordinary semantic parsing, and we can rely on an off-the-shelf syntactic parser to get most of the way there. In this work, syntactic structure is provided by the Stanford dependency parser (De Marneffe and Manning, 2008).
The construction of layout candidates is depicted in Figure 3, and proceeds as follows:
1. Represent the input sentence as a dependency tree.
2. Collect all nouns, verbs, and prepositional phrases that are attached directly to a wh-word or copula.
1601.01705 | 19 | 3. Associate each of these with a layout fragment: ordinary nouns and verbs are mapped to a single find module, and proper nouns to a single lookup module. Prepositional phrases are mapped to a depth-2 fragment, with a relate module for the preposition above a find module for the enclosed head noun.
4. Form subsets of this set of layout fragments. For each subset, construct a layout candidate by joining all fragments with an and module, and inserting either a measure or describe module at the top (each subset thus results in two parse candidates).
All layouts resulting from this process feature a relatively flat tree structure with at most one conjunction and one quantifier. This is a strong simplifying assumption, but appears sufficient to cover most of the examples that appear in both of our tasks. As our approach includes categories, relations, and simple quantification, the range of phenomena considered is generally broader than previous perceptually-grounded QA work (Krishnamurthy and Kollar, 2013; Matuszek et al., 2012).
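The listing below sketches this candidate-generation procedure under simplifying assumptions: a toy stand-in for the dependency parse, a crude proper-noun test, and the module names used in this paper. It illustrates steps 2–4 only and is not the authors' implementation.

```python
from itertools import combinations

# Toy "parse": (phrase, POS, attached-to-wh-or-copula?) triples for
# "What cities are in Georgia?" -- a stand-in for real dependency output.
parse = [("cities", "NOUN", True), ("in Georgia", "PP", True)]

def fragment(phrase, pos):
    """Step 3: map a word/phrase to a layout fragment."""
    if pos == "PROPN":
        return f"lookup[{phrase}]"
    if pos == "PP":
        prep, noun = phrase.split()
        inner = f"lookup[{noun}]" if noun[0].isupper() else f"find[{noun}]"
        return (f"relate[{prep}]", inner)
    return f"find[{phrase}]"

def candidates(parse):
    """Steps 2-4: collect attached fragments, join subsets with `and`,
    and top each subset with either a measure or a describe module."""
    frags = [fragment(p, pos) for (p, pos, attached) in parse if attached]
    for r in range(1, len(frags) + 1):
        for subset in combinations(frags, r):
            body = subset[0] if len(subset) == 1 else ("and", *subset)
            yield ("measure", body)
            yield ("describe", body)

for layout in candidates(parse):
    print(layout)
```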
1601.01705 | 20 | Having generated a set of candidate parses, we need to score them. This is a ranking problem; as in the rest of our approach, we solve it using standard neural machinery. In particular, we produce an LSTM representation of the question and a feature-based representation of the query, and pass both representations through a multilayer perceptron (MLP). The query feature vector includes indicators on the number of modules of each type present, as well as their associated parameter arguments. While one can easily imagine a more sophisticated parse-scoring model, this simple approach works well for our tasks.
Formally, for a question x, let hq(x) be an LSTM encoding of the question (i.e. the last hidden layer of an LSTM applied word-by-word to the input question). Let {z1, z2, ...} be the proposed layouts for x, and let f(zi) be a feature vector representing the ith layout. Then the score s(zi | x) for the layout zi is

s(zi | x) = aᵀ σ(B hq(x) + C f(zi) + d)    (8)

i.e. the output of an MLP with inputs hq(x) and f(zi), and parameters θℓ = {a, B, C, d}. Finally, we normalize these scores to obtain a distribution:
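A minimal NumPy sketch of this scoring step is shown below. The question encoding and layout features are faked with random vectors (in the paper they come from an LSTM and from module-count indicator features respectively), and all dimensions are made up for illustration.

```python
import numpy as np

def softmax(s):
    z = np.exp(s - s.max())
    return z / z.sum()

rng = np.random.default_rng(0)
H, F, M = 64, 10, 32                 # LSTM size, layout-feature size, MLP hidden size
a, B, C, d = (rng.normal(size=M), rng.normal(size=(M, H)),
              rng.normal(size=(M, F)), rng.normal(size=M))

def score(h_q, f_z):
    """Eq. 8: s(z_i | x) = a^T sigma(B h_q(x) + C f(z_i) + d)."""
    return a @ np.maximum(B @ h_q + C @ f_z + d, 0.0)

h_q = rng.normal(size=H)                                # stand-in for the LSTM encoding
layout_feats = [rng.normal(size=F) for _ in range(3)]   # three candidate layouts
scores = np.array([score(h_q, f) for f in layout_feats])
p_layout = softmax(scores)           # normalize to a distribution over layouts (Eq. 9)
print(p_layout)
```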
1601.01705 | 21 | p(zi | x; θℓ) = exp(s(zi | x)) / Σ_{j=1..n} exp(s(zj | x))    (9)

Having defined a layout selection model p(z | x; θℓ) and a network execution model pz(y | w; θe), we are ready to define a model for predicting answers given only (world, question) pairs. The key constraint is that we want to minimize evaluations of pz(y | w; θe) (which involves
expensive application of a deep network to a large input representation), but can tractably evaluate p(z | x; θℓ) for all z (which involves application of a shallow network to a relatively small set of candidates). This is the opposite of the situation usually encountered in semantic parsing, where calls to the query execution model are fast but the set of candidate parses is too large to score exhaustively.
In fact, the problem more closely resembles the scenario faced by agents in the reinforcement learning setting (where it is cheap to score actions, but potentially expensive to execute them and obtain rewards). We adopt a common approach from that literature, and express our model as a stochastic policy. Under this policy, we first sample a layout z from the distribution p(z | x; θℓ), and then apply z to the knowledge source and obtain a distribution over answers p(y | z, w; θe).
1601.01705 | 22 | After z is chosen, we can train the execution model directly by maximizing log p(y | z, w; θe) with respect to θe as before (this is ordinary backpropagation). Because the hard selection of z is non-differentiable, we optimize p(z | x; θℓ) using a policy gradient method. The gradient of the reward surface J with respect to the parameters of the policy is

∇J(θℓ) = E[∇ log p(z | x; θℓ) · r]    (10)

(this is the REINFORCE rule (Williams, 1992)). Here the expectation is taken with respect to rollouts of the policy, and r is the reward. Because our goal is to select the network that makes the most accurate predictions, we take the reward to be identically the log-probability from the execution phase, i.e.

∇J(θℓ) = E[∇ log p(z | x; θℓ) · log p(y | z, w; θe)]    (11)
1601.01705 | 23 | Thus the update to the layout-scoring model at each timestep is simply the gradient of the log-probability of the chosen layout, scaled by the accuracy of that layout's predictions. At training time, we approximate the expectation with a single rollout, so at each step we update θℓ in the direction (∇ log p(z | x; θℓ)) · log p(y | z, w; θe) for a single z ∼ p(z | x; θℓ). θℓ and θe are optimized using ADADELTA with ρ = 0.95, ε = 1e−6 and gradient clipping at a norm of 10.
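The following self-contained sketch illustrates the resulting single-rollout update for a softmax policy over candidate layouts. For simplicity the raw scores are treated directly as the policy parameters (so the gradient of log p(z | x) has the closed form onehot(z) − p), the execution model is replaced by a fixed toy log-probability per layout, and the separate backpropagation into θe is omitted; this demonstrates the update rule of Equations 10–11, not the released training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(s):
    z = np.exp(s - s.max())
    return z / z.sum()

def reinforce_step(scores, exec_log_prob_fn, lr=0.1):
    """One policy-gradient update for a softmax policy over candidate layouts."""
    p = softmax(scores)
    z = rng.choice(len(scores), p=p)      # single rollout z ~ p(z | x)
    reward = exec_log_prob_fn(z)          # r = log p(y | z, w; theta_e)
    grad_log_p = -p                       # d/d(scores) log p(z | x) for a softmax policy
    grad_log_p[z] += 1.0
    return scores + lr * reward * grad_log_p, z, reward

# Toy execution model: pretend layout index 1 explains the answer best.
true_quality = np.log(np.array([0.2, 0.7, 0.1]))
scores = np.zeros(3)
for _ in range(200):
    scores, _, _ = reinforce_step(scores, lambda z: true_quality[z])
print(softmax(scores))   # probability mass shifts toward layout index 1
```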
[Figure 4 examples: "What is in the sheep's ear?" with layout (describe[what] (and find[sheep] find[ear])); "What color is she wearing?" with layout (describe[color] find[wear]); "What is the man dragging?" with layout (describe[what] find[man]); answers shown: tag, white, boat (board).]
1601.01705 | 25 | Figure 4: Sample outputs for the visual question answering task. The second row shows the final attention provided as input to the top-level describe module. For the first two examples, the model produces reasonable parses, attends to the correct region of the images (the ear and the woman's clothing), and generates the correct answer. In the third image, the verb is discarded and a wrong answer is produced.
# 5 Experiments
The framework described in this paper is general, and we are interested in how well it performs on datasets of varying domain, size and linguistic complexity. To that end, we evaluate our model on tasks at opposite extremes of both these criteria: a large visual question answering dataset, and a small collection of more structured geography questions.
# 5.1 Questions about images
Our ï¬rst task is the recently-introduced Visual Ques- tion Answering challenge (VQA) (Antol et al., 2015). The VQA dataset consists of more than 200,000 images paired with human-annotated ques- tions and answers, as in Figure 4. | 1601.01705#25 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 26 | We use the VQA 1.0 release, employing the development set for model selection and hyperparameter tuning, and reporting final results from the evaluation server on the test-standard set. For the experiments described in this section, the input feature representations wi are computed by the fifth convolutional layer of a 16-layer VGGNet after pooling (Simonyan and Zisserman, 2014). Input images are scaled to 448×448 before computing their representations. We found that performance on this task was
Model         Yes/No  Number  Other  All (test-dev)  All (test-std)
Zhou (2015)   76.6    35.0    42.6   55.7            55.9
Noh (2015)    80.7    37.2    41.7   57.2            57.4
Yang (2015)   79.3    36.6    46.1   58.7            58.9
NMN           81.2    38.0    44.0   58.6            58.7
D-NMN         81.1    38.6    45.5   59.4            59.4
Table 1: Results on the VQA test server. NMN is the parameter-tying model from Andreas et al. (2015), and D-NMN is the model described in this paper. | 1601.01705#26 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 27 | best if the candidate layouts were relatively simple: only describe, and and find modules are used, and layouts contain at most two conjuncts.
One weakness of this basic framework is a difficulty modeling prior knowledge about answers (of the form most bears are brown). This kind of linguistic "prior" is essential for the VQA task, and easily incorporated. We simply introduce an extra hidden layer for recombining the final module network output with the input sentence representation hq(x) (see Equation 8), replacing Equation 1 with:
log p_z(y|w, x) = (A h_q(x) + B ⟦z⟧ w)_y    (12)
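A small numpy sketch of one plausible reading of this recombination (a log-softmax over answers): the dimensions, weight matrices and function names below are illustrative placeholders, not values or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_answers, d_question, d_module = 100, 64, 128
A = rng.normal(0, 0.02, (n_answers, d_question))   # reads out h_q(x)
B = rng.normal(0, 0.02, (n_answers, d_module))     # reads out the module output [[z]]_w

def answer_log_probs(h_q, module_out):
    logits = A @ h_q + B @ module_out               # (A h_q(x) + B [[z]] w)
    logits = logits - logits.max()
    return logits - np.log(np.exp(logits).sum())    # log-normalised over answers

log_p = answer_log_probs(rng.normal(size=d_question), rng.normal(size=d_module))
print(log_p.shape, float(np.exp(log_p).sum()))      # (100,) 1.0
```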
(Now modules with output type Labels should be understood as producing an answer embedding rather than a distribution over answers.) This allows the question to inï¬uence the answer directly. | 1601.01705#27 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 28 | (Now modules with output type Labels should be understood as producing an answer embedding rather than a distribution over answers.) This allows the question to inï¬uence the answer directly.
Results are shown in Table 1. The use of dynamic networks provides a small gain, most noticeably on "other" questions. We achieve state-of-the-art results on this task, outperforming a highly effective visual bag-of-words model (Zhou et al., 2015), a model with dynamic network parameter prediction (but fixed network structure) (Noh et al., 2015), a more conventional attentional model (Yang et al., 2015), and a previous approach using neural module networks with no structure prediction (Andreas et al., 2016).
Some examples are shown in Figure 4. In general, the model learns to focus on the correct region of the image, and tends to consider a broad window around the region. This facilitates answering questions like Where is the cat?, which requires knowledge of the surroundings as well as the object in question.
Model   GeoQA accuracy  GeoQA+Q accuracy
LSP-F   48              –
LSP-W   51              –
NMN     51.7            35.7
D-NMN   54.3            42.9 | 1601.01705#28 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 29 | Table 2: Results on the GeoQA dataset, and the GeoQA dataset with quantiï¬cation. Our approach outperforms both a purely logical model (LSP-F) and a model with learned percep- tual predicates (LSP-W) on the original dataset, and a ï¬xed- structure NMN under both evaluation conditions.
# 5.2 Questions about geography
The next set of experiments we consider focuses on GeoQA, a geographical question-answering task ï¬rst introduced by Krishnamurthy and Kollar (2013). This task was originally paired with a vi- sual question answering task much simpler than the one just discussed, and is appealing for a number of reasons. In contrast to the VQA dataset, GeoQA is quite small, containing only 263 examples. Two baselines are available: one using a classical se- mantic parser backed by a database, and another which induces logical predicates using linear clas- siï¬ers over both spatial and distributional features. This allows us to evaluate the quality of our model relative to other perceptually grounded logical se- mantics, as well as strictly logical approaches. | 1601.01705#29 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 30 | The GeoQA domain consists of a set of entities (e.g. states, cities, parks) which participate in vari- ous relations (e.g. north-of, capital-of). Here we take the world representation to consist of two pieces: a set of category features (used by the find module) and a different set of relational features (used by the relate module). For our experiments, we use a sub- set of the features originally used by Krishnamurthy et al. The original dataset includes no quantiï¬ers, and treats the questions What cities are in Texas? and Are there any cities in Texas? identically. Be- cause we are interested in testing the parserâs ability to predict a variety of different structures, we intro- duce a new version of the dataset, GeoQA+Q, which distinguishes these two cases, and expects a Boolean answer to questions of the second kind. | 1601.01705#30 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 31 | Results are shown in Table 2. As in the orig- inal work, we report the results of leave-one- environment-out cross-validation on the set of 10 enIs Key Largo an island? (exists (and lookup[key-largo] find[island])) yes: correct What national parks are in Florida? (and find[park] (relate[in] lookup[florida])) everglades: correct What are some beaches in Florida? (exists (and lookup[beach] (relate[in] lookup[florida]))) yes (daytona-beach): wrong parse What beach city is there in Florida? (and lookup[beach] lookup[city] (relate[in] lookup[florida])) [none] (daytona-beach): wrong module behavior
Figure 5: Example layouts and answers selected by the model on the GeoQA dataset. For incorrect predictions, the correct answer is shown in parentheses. | 1601.01705#31 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 32 | Figure 5: Example layouts and answers selected by the model on the GeoQA dataset. For incorrect predictions, the correct answer is shown in parentheses.
vironments. Our dynamic model (D-NMN) outperforms both the logical (LSP-F) and perceptual models (LSP-W) described by Krishnamurthy and Kollar (2013), as well as a fixed-structure neural module net (NMN). This improvement is particularly notable on the dataset with quantifiers, where dynamic structure prediction produces a 20% relative improvement over the fixed baseline. A variety of predicted layouts are shown in Figure 5.
# 6 Conclusion
We have introduced a new model, the dynamic neu- ral module network, for answering queries about both structured and unstructured sources of informa- tion. Given only (question, world, answer) triples as training data, the model learns to assemble neu- ral networks on the ï¬y from an inventory of neural models, and simultaneously learns weights for these modules so that they can be composed into novel structures. Our approach achieves state-of-the-art results on two tasks. We believe that the success of this work derives from two factors: | 1601.01705#32 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 33 | Continuous representations improve the expres- siveness and learnability of semantic parsers: by re- placing discrete predicates with differentiable neural network fragments, we bypass the challenging com- binatorial optimization problem associated with in- duction of a semantic lexicon. In structured world
representations, neural predicate representations allow the model to invent reusable attributes and relations not expressed in the schema. Perhaps more importantly, we can extend compositional question-answering machinery to complex, continuous world representations like images.
Semantic structure prediction improves generalization in deep networks: by replacing a fixed network topology with a dynamic one, we can tailor the computation performed to each problem instance, using deeper networks for more complex questions and representing combinatorially many queries with comparatively few parameters. In practice, this results in considerable gains in speed and sample efficiency, even with very little training data.
These observations are not limited to the question answering domain, and we expect that they can be applied similarly to tasks like instruction following, game playing, and language generation.
# Acknowledgments | 1601.01705#33 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 34 | These observations are not limited to the question answering domain, and we expect that they can be applied similarly to tasks like instruction following, game playing, and language generation.
# Acknowledgments
JA is supported by a National Science Foundation Graduate Fellowship. MR is supported by a fellow- ship within the FIT weltweit-Program of the German Academic Exchange Service (DAAD). This work was additionally supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS- 1427425 and IIS-1212798, and the Berkeley Vision and Learning Center.
# References
Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Pro- ceedings of the Conference on Computer Vision and Pattern Recognition.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answer- In Proceedings of the International Conference ing. on Computer Vision. | 1601.01705#34 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 35 | Islam Beltagy, Cuong Chau, Gemma Boleda, Dan Gar- rette, Katrin Erk, and Raymond Mooney. 2013. Mon- tague meets markov: Deep semantics with probabilis- tic logical form. Proceedings of the Joint Conference
on Distributional and Logical Semantics, pages 11â 21.
Jonathan Berant and Percy Liang. 2014. Semantic pars- In Proceedings of the Annual ing via paraphrasing. Meeting of the Association for Computational Linguis- tics, volume 7, page 92.
Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing.
L´eon Bottou, Yoshua Bengio, and Yann Le Cun. 1997. Global training of document processing systems us- ing graph transformer networks. In Proceedings of the Conference on Computer Vision and Pattern Recogni- tion, pages 489â494. IEEE.
L´eon Bottou. 2014. From machine learning to machine reasoning. Machine learning, 94(2):133â149.
Marie-Catherine De Marneffe and Christopher D Man- ning. 2008. The Stanford typed dependencies repre- sentation. In Proceedings of the International Confer- ence on Computational Linguistics, pages 1â8. | 1601.01705#35 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 36 | Edward Grefenstette. 2013. Towards a formal distribu- tional semantics: Simulating logical calculi with ten- sors. Joint Conference on Lexical and Computational Semantics.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684â1692.
Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. 2014. A neu- ral network for factoid question answering over para- graphs. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing.
Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: connecting natural lan- guage to the physical world. Transactions of the Asso- ciation for Computational Linguistics.
Jayant Krishnamurthy and Tom Mitchell. 2013. Vec- tor space semantic parsing: A framework for compo- In Proceedings of the sitional vector space models. ACL Workshop on Continuous Vector Space Models and their Compositionality. | 1601.01705#36 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 37 | Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwa- ter, and Mark Steedman. 2010. Inducing probabilis- tic CCG grammars from logical form with higher- In Proceedings of the Conference order uniï¬cation. on Empirical Methods in Natural Language Process- ing, pages 1223â1233, Cambridge, Massachusetts. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on- the-ï¬y ontology matching. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Mike Lewis and Mark Steedman. 2013. Combining distributional and logical semantics. Transactions of the Association for Computational Linguistics, 1:179â 192.
2011. Learning dependency-based compositional semantics. In Proceedings of the Human Language Technology Conference of the Association for Computational Lin- guistics, pages 590â599, Portland, Oregon. | 1601.01705#37 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 38 | 2011. Learning dependency-based compositional semantics. In Proceedings of the Human Language Technology Conference of the Association for Computational Lin- guistics, pages 590â599, Portland, Oregon.
Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2015. Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the International Conference on Computer Vision. Cynthia Matuszek, Nicholas FitzGerald, Luke Zettle- moyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded at- tribute learning. In International Conference on Ma- chine Learning.
Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han. 2015. Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756.
Panupong Pasupat and Percy Liang. 2015. Composi- tional semantic parsing on semi-structured tables. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Ex- ploring models and data for image question answer- In Advances in Neural Information Processing ing. Systems. | 1601.01705#38 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 39 | Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Ex- ploring models and data for image question answer- In Advances in Neural Information Processing ing. Systems.
K Simonyan and A Zisserman. 2014. Very deep con- volutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
Geoffrey G. Towell and Jude W. Shavlik. 1994. Knowledge-based artificial neural networks. Artificial Intelligence, 70(1):119–165.
Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256.
Yuk Wah Wong and Raymond J. Mooney. 2007. Learn- ing synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the Annual Meet- ing of the Association for Computational Linguistics, volume 45, page 960.
2015. Ask, attend and answer: Exploring question-guided spatial atten- arXiv preprint tion for visual question answering. arXiv:1511.05234. | 1601.01705#39 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01705 | 40 | 2015. Ask, attend and answer: Exploring question-guided spatial atten- arXiv preprint tion for visual question answering. arXiv:1511.05234.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual In International Conference on Machine attention. Learning.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention net- 2015. works for image question answering. arXiv preprint arXiv:1511.02274.
Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2015. Neural enquirer: Learning to query tables. arXiv preprint arXiv:1512.00965.
Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.
Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. Simple base- arXiv preprint line for visual question answering. arXiv:1512.02167. | 1601.01705#40 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 | [
{
"id": "1511.05234"
},
{
"id": "1512.02167"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1512.00965"
}
] |
1601.01280 | 0 | arXiv:1601.01280v2 [cs.CL] 6 Jun 2016
# Language to Logical Form with Neural Attention
Li Dong and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected], [email protected]
# Abstract
Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domain- or representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.
# Introduction | 1601.01280#0 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 1 | # Introduction
Semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries. There has recently been a surge of interest in developing machine learning methods for semantic parsing (see the references in Section 2), due in part to the existence of corpora containing utterances annotated with formal meaning representations. Figure 1 shows an example of a question (left hand-side) and its annotated logical form (right hand-side), taken from JOBS (Tang and Mooney, 2001), a well-known semantic parsing benchmark. In order to predict the correct logical form for a given utterance, most previous systems rely on predefined templates and manually designed features, which often render the parsing model domain- or representation-specific. In this work, we aim to use a simple yet effective method to bridge the gap between natural language and logical form with minimal domain knowledge.
(Figure 1 schematic: the input utterance "what microsoft jobs do not require a bscs?" is read by the sequence encoder, an attention layer, and a sequence/tree decoder that produces the logical form answer(J,(company(J,'microsoft'),job(J),not((req_deg(J,'bscs')))))) | 1601.01280#1 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 3 | Encoder-decoder architectures based on recur- rent neural networks have been successfully ap- plied to a variety of NLP tasks ranging from syn- tactic parsing (Vinyals et al., 2015a), to machine translation (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014), and image description generation (Karpathy and Fei- Fei, 2015; Vinyals et al., 2015b). As shown in Figure 1, we adapt the general encoder-decoder paradigm to the semantic parsing task. Our model learns from natural language descriptions paired with meaning representations; it encodes sentences and decodes logical forms using recur- rent neural networks with long short-term memory (LSTM) units. We present two model variants, the ï¬rst one treats semantic parsing as a vanilla sequence transduction task, whereas our second model is equipped with a hierarchical tree decoder which explicitly captures the compositional struc- ture of logical forms. We also introduce an atten- tion mechanism (Bahdanau et al., 2015; Luong et al., 2015b) allowing the model to learn soft align- ments between natural language and logical forms and present an argument identiï¬cation step to han- dle rare mentions of entities and numbers. | 1601.01280#3 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 5 | The problem of learning semantic parsers has received signiï¬cant attention, dating back to Woods (1973). Many approaches learn from sen- tences paired with logical forms following vari- ous modeling strategies. Examples include the use of parsing models (Miller et al., 1996; Ge and Mooney, 2005; Lu et al., 2008; Zhao and Huang, 2015), inductive logic programming (Zelle and Mooney, 1996; Tang and Mooney, 2000; Thom- spon and Mooney, 2003), probabilistic automata (He and Young, 2006), string/tree-to-tree transfor- mation rules (Kate et al., 2005), classiï¬ers based on string kernels (Kate and Mooney, 2006), ma- chine translation (Wong and Mooney, 2006; Wong and Mooney, 2007; Andreas et al., 2013), and combinatory categorial grammar induction tech- niques (Zettlemoyer and Collins, 2005; Zettle- moyer and Collins, 2007; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011). Other work learns semantic parsers without relying on logical- from annotations, e.g., from sentences paired with conversational logs (Artzi and | 1601.01280#5 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 6 | Other work learns semantic parsers without relying on logical- from annotations, e.g., from sentences paired with conversational logs (Artzi and Zettlemoyer, 2011), system demonstrations (Chen and Mooney, 2011; Goldwasser and Roth, 2011; Artzi and Zettle- moyer, 2013), question-answer pairs (Clarke et al., 2010; Liang et al., 2013), and distant supervi- sion (Krishnamurthy and Mitchell, 2012; Cai and Yates, 2013; Reddy et al., 2014). | 1601.01280#6 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 7 | Our model learns from natural language de- scriptions paired with meaning representations. Most previous systems rely on high-quality lex- templates, and features icons, manually-built which are either domain- or representation- speciï¬c. We instead present a general method that can be easily adapted to different domains and meaning representations. We adopt the general encoder-decoder framework based on neural net- works which has been recently repurposed for var- ious NLP tasks such as syntactic parsing (Vinyals et al., 2015a), machine translation (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014), image description generation (Karpathy and Fei-Fei, 2015; Vinyals et al., 2015b), ques- tion answering (Hermann et al., 2015), and sum- marization (Rush et al., 2015).
Mei et al. (2016) use a sequence-to-sequence model to map navigational instructions to actions. | 1601.01280#7 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 8 | Mei et al. (2016) use a sequence-to-sequence model to map navigational instructions to actions.
Our model works on more well-deï¬ned meaning representations (such as Prolog and lambda cal- culus) and is conceptually simpler; it does not employ bidirectionality or multi-level alignments. Grefenstette et al. (2014) propose a different ar- chitecture for semantic parsing based on the com- bination of two neural network models. The ï¬rst model learns shared representations from pairs of questions and their translations into knowledge base queries, whereas the second model generates the queries conditioned on the learned representa- tions. However, they do not report empirical eval- uation results.
# 3 Problem Formulation
Our aim is to learn a model which maps natural language input q = x1 · · · x|q| to a logical form representation of its meaning a = y1 · · · y|a|. The conditional probability p (a|q) is decomposed as:
p(a|q) = ∏_{t=1}^{|a|} p(y_t | y_{<t}, q)    (1)
where y<t = y1 · · · ytâ1. | 1601.01280#8 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 9 | lal pala) = [[ p wily<t.) () t=1
where y<t = y1 · · · ytâ1.
Our method consists of an encoder which en- codes natural language input q into a vector repre- sentation and a decoder which learns to generate y1, · · · , y|a| conditioned on the encoding vector. In the following we describe two models varying in the way in which p (a|q) is computed.
# 3.1 Sequence-to-Sequence Model
This model regards both input q and output a as sequences. As shown in Figure 2, the encoder and decoder are two different L-layer recurrent neural networks with long short-term memory (LSTM) units which recursively process tokens one by one. The first |q| time steps belong to the encoder, while the following |a| time steps belong to the decoder. Let h_t^l ∈ R^n denote the hidden vector at time step t and layer l; h_t^l is then computed by:
hl t = LSTM tâ1, hlâ1 hl t (2) | 1601.01280#9 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 10 | hl t = LSTM tâ1, hlâ1 hl t (2)
where LSTM refers to the LSTM function being used. In our experiments we follow the architecture described in Zaremba et al. (2015), however other types of gated activation functions are possible (e.g., Cho et al. (2014)). For the encoder, h_t^0 = W_q e(x_t) is the word vector of the current input token, with W_q ∈ R^{n×|V_q|} being a parameter matrix, and e(·) the index of the corresponding
(Figure 2 schematic: stacked LSTM units read x_1 · · · x_|q|, after which the decoder LSTM units generate y_1, y_2, · · · starting from <s>.)
Figure 2: model with two-layer recurrent neural networks. | 1601.01280#10 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 11 | Figure 2: model with two-layer recurrent neural networks.
token. For the decoder, h_t^0 = W_a e(y_{t−1}) is the word vector of the previous predicted word, where W_a ∈ R^{n×|V_a|}. Notice that the encoder and decoder have different LSTM parameters. Once the input sequence x_1, · · · , x_|q| is encoded into vectors, they are used to initialize the hidden states of the first time step in the decoder. Next, the hidden vector of the topmost LSTM h_t^L in the decoder is used to predict the t-th output token as:
p(y_t | y_{<t}, q) = softmax(W_o h_t^L)^⊤ e(y_t)    (3)
where W_o ∈ R^{|V_a|×n} is a parameter matrix, and e(y_t) ∈ {0, 1}^{|V_a|} a one-hot vector for computing y_t's probability from the predicted distribution.
We augment every sequence with a "start-of-sequence" <s> and "end-of-sequence" </s> token. The generation process terminates once </s> is predicted. The conditional probability of generating the whole sequence p(a|q) is then obtained using Equation (1).
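A self-contained numpy sketch of a single-layer, untrained version of this setup follows; the sizes, weight initialisations and helper names are illustrative only, and it shows just the recurrence of Equation (2) and the softmax prediction of Equation (3).

```python
import numpy as np

rng = np.random.default_rng(0)
n, Vq, Va = 8, 12, 10                      # hidden size and toy vocabulary sizes

def lstm_params(in_dim, hid):
    return rng.normal(0, 0.1, (4 * hid, in_dim + hid)), np.zeros(4 * hid)

def lstm_step(x, h, c, W, b):
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)
    return o * np.tanh(c), c

Wq = rng.normal(0, 0.1, (n, Vq))           # input word embeddings
Wa = rng.normal(0, 0.1, (n, Va))           # output word embeddings
Wo = rng.normal(0, 0.1, (Va, n))           # output projection
enc_W, enc_b = lstm_params(n, n)
dec_W, dec_b = lstm_params(n, n)

def encode(x_ids):
    h = c = np.zeros(n)
    for x in x_ids:
        h, c = lstm_step(Wq[:, x], h, c, enc_W, enc_b)
    return h, c                            # initialises the decoder

def decode_step(y_prev, h, c):
    h, c = lstm_step(Wa[:, y_prev], h, c, dec_W, dec_b)
    logits = Wo @ h
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    return probs, h, c                     # p(y_t | y_<t, q), as in Equation (3)

h, c = encode([3, 5, 7])
probs, h, c = decode_step(0, h, c)         # token 0 plays the role of <s>
print(probs.argmax())
```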
# 3.2 Sequence-to-Tree Model | 1601.01280#11 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 12 | # 3.2 Sequence-to-Tree Model
The SEQ2SEQ model has a potential drawback in that it ignores the hierarchical structure of logical forms. As a result, it needs to memorize various pieces of auxiliary information (e.g., bracket pairs) to generate well-formed output. In the following we present a hierarchical tree decoder which is more faithful to the compositional nature of mean- ing representations. A schematic description of the model is shown in Figure 3.
The present model shares the same encoder with the sequence-to-sequence model described in Sec- tion 3.1 (essentially it learns to encode input q as vectors). However, its decoder is fundamentally different as it generates logical forms in a top- down manner. In order to represent tree structure,
(Figure 3 schematic: encoder units feed decoder units that emit "lambda $0 e <n> </s>" at the top level; nonterminal nodes such as "departure_time $0 </s>" are expanded by further decoding steps, with start-decoding and parent-feeding connections marked.)
Figure 3: Sequence-to-tree (SEQ2TREE) model with a hierarchical tree decoder. | 1601.01280#12 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 13 | Figure 3: Sequence-to-tree (SEQ2TREE) model with a hierarchical tree decoder.
we define a "nonterminal" <n> token which indicates subtrees. As shown in Figure 3, we preprocess the logical form "lambda $0 e (and (> (departure_time $0) 1600:ti) (from $0 dallas:ci))" to a tree by replacing tokens between pairs of brackets with nonterminals. Special tokens <s> and <(> denote the beginning of a sequence and nonterminal sequence, respectively (omitted from Figure 3 due to lack of space). Token </s> represents the end of sequence.
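A hedged sketch of this preprocessing step is shown below; the parsing helpers are our own illustration, not the paper's code, but they reproduce the idea that each bracketed span becomes a <n> token at its parent's level.

```python
# Convert a flat, bracketed logical form into nested lists, then read off the
# token sequence the decoder would emit at a given depth (<n> marks subtrees).
def to_tree(tokens):
    def parse(i):
        seq = []
        while i < len(tokens):
            t = tokens[i]
            if t == "(":
                sub, i = parse(i + 1)
                seq.append(sub)
            elif t == ")":
                return seq, i + 1
            else:
                seq.append(t)
                i += 1
        return seq, i
    return parse(0)[0]

def top_level(tree):
    return ["<n>" if isinstance(t, list) else t for t in tree]

form = "lambda $0 e ( and ( > ( departure_time $0 ) 1600:ti ) ( from $0 dallas:ci ) )"
tree = to_tree(form.split())
print(top_level(tree))   # ['lambda', '$0', 'e', '<n>']
```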
After encoding input q, the hierarchical tree de- coder uses recurrent neural networks to generate tokens at depth 1 of the subtree corresponding to parts of logical form a. If the predicted token is <n>, we decode the sequence by conditioning on the nonterminalâs hidden vector. This process terminates when no more nonterminals are emit- ted. In other words, a sequence decoder is used to hierarchically generate the tree structure. | 1601.01280#13 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 14 | In contrast to the sequence decoder described in Section 3.1, the current hidden state does not only depend on its previous time step. In order to better utilize the parent nonterminalâs information, we introduce a parent-feeding connection where the hidden vector of the parent nonterminal is con- catenated with the inputs and fed into LSTM.
As an example, Figure 4 shows the decoding tree corresponding to the logical form âA B (C)â, where y1 · · · y6 are predicted tokens, and t1 · · · t6 denote different time steps. Span â(C)â corre- sponds to a subtree. Decoding in this example has two steps: once input q has been encoded, we ï¬rst generate y1 · · · y4 at depth 1 until token </s> is
y1=A  y2=B  y3=<n>  y4=</s>; below the nonterminal: y5=C  y6=</s>
Figure 4: A SEQ2TREE decoding example for the logical form âA B (C)â.
predicted; next, we generate y5, y6 by condition- ing on nonterminal t3âs hidden vectors. The prob- ability p (a|q) is the product of these two sequence decoding steps: | 1601.01280#14 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 15 | p (a|q) = p (y1y2y3y4|q) p (y5y6|yâ¤3, q)
where Equation (3) is used for the prediction of each output token.
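The following is a hedged Python sketch of this two-step, hierarchical decoding; `step` is a stand-in for the trained decoder, and the state bookkeeping is simplified (the <(> start token approximates the parent-feeding connection).

```python
def decode_sequence(prev_token, state, step):
    """step(prev_token, state) -> (token, next_state). Decode one level until </s>,
    then expand every emitted <n> from the hidden state recorded at that position."""
    tokens, nt_states = [], []
    while True:
        token, state = step(prev_token, state)
        if token == "</s>":
            break
        if token == "<n>":
            nt_states.append(state)            # remember the nonterminal's state
        tokens.append(token)
        prev_token = token
    subtrees = iter(nt_states)
    return [decode_sequence("<(>", next(subtrees), step) if t == "<n>" else t
            for t in tokens]

# Toy driver scripting the decoder's behaviour for the logical form "A B (C)":
script = {("<s>", 0): ("A", 1), ("A", 1): ("B", 2), ("B", 2): ("<n>", 3),
          ("<n>", 3): ("</s>", 4), ("<(>", 3): ("C", 5), ("C", 5): ("</s>", 6)}
print(decode_sequence("<s>", 0, lambda tok, st: script[(tok, st)]))   # ['A', 'B', ['C']]
```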
# 3.3 Attention Mechanism
As shown in Equation (3), the hidden vectors of the input sequence are not directly used in the decoding process. However, it makes intuitively sense to consider relevant information from the in- put to better predict the current token. Following this idea, various techniques have been proposed to integrate encoder-side information (in the form of a context vector) at each time step of the de- coder (Bahdanau et al., 2015; Luong et al., 2015b; Xu et al., 2015).
As shown in Figure 5, in order to ï¬nd rele- vant encoder-side context for the current hidden state hL t of decoder, we compute its attention score with the k-th hidden state in the encoder as:
s_k^t = exp{h_k^L · h_t^L} / Σ_{j=1}^{|q|} exp{h_j^L · h_t^L}    (5)
where h_1^L, · · · , h_|q|^L are the top-layer hidden vectors of the encoder. Then, the context vector is the weighted sum of the hidden vectors in the encoder:
lal c= S- sth (6) k=1 | 1601.01280#15 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 16 | lal c= S- sth (6) k=1
In lieu of Equation (3), we further use this con- text vector which acts as a summary of the encoder to compute the probability of generating yt as:
hf = tanh (Wih? + Wace) (7)
c Ut Attention Scores ng XI\qFigure 5: Attention scores are computed by the current hidden vector and all the hidden vectors of encoder. Then, the encoder-side context vector ct is obtained in the form of a weighted sum, which is further used to predict yt.
(8) where Wo â R|Va|Ãn and W1, W2 â RnÃn are three parameter matrices, and e (yt) is a one-hot vector used to obtain ytâs probability.
# 3.4 Model Training
Our goal is to maximize the likelihood of the gen- erated logical forms given natural language utter- ances as input. So the objective function is:
minimize â log p (a|q) (q,a)âD (9)
where D is the set of all natural language-logical form training pairs, and p (a|q) is computed as shown in Equation (1). | 1601.01280#16 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 17 | where D is the set of all natural language-logical form training pairs, and p (a|q) is computed as shown in Equation (1).
The RMSProp algorithm (Tieleman and Hin- ton, 2012) is employed to solve this non-convex optimization problem. Moreover, dropout is used for regularizing the model (Zaremba et al., 2015). Speciï¬cally, dropout operators are used between different LSTM layers and for the hidden lay- ers before the softmax classiï¬ers. This technique can substantially reduce overï¬tting, especially on datasets of small size.
# 3.5 Inference
At test time, we predict the logical form for an in- put utterance q by:
(10) @ = arg maxp (a'|q) a!
where aâ represents a candidate output. How- ever, it is impractical to iterate over all possible results to obtain the optimal prediction. Accord- ing to Equation (I), we decompose the probabil- ity p(alq) so that we can use greedy search (or beam search) to generate tokens one by one. | 1601.01280#17 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 18 | Algorithm 1 Decoding for SEQ2TREE Input: q: Natural language utterance Output: 4G: Decoding result > Push the encoding result to a queue Q.init({hid : SeqEnc(q)}) > Decode until no more nonterminals while (c â Q.pop()) 4 @ do > Call sequence decoder c.child â SeqDec(c.hid) > Push new nonterminals to queue for n â nonterminal in c.child do Q.push({hid : HidVec(n)}) a@ â convert decoding tree to output sequence Serr awhrennr =
Algorithm 1 describes the decoding process for SEQ2TREE. The time complexity of both de- coders is O(|a|), where |a| is the length of out- put. The extra computation of SEQ2TREE com- pared with SEQ2SEQ is to maintain the nonter- minal queue, which can be ignored because most of time is spent on matrix operations. We imple- ment the hierarchical tree decoder in a batch mode, so that it can fully utilize GPUs. Speciï¬cally, as shown in Algorithm 1, every time we pop multi- ple nonterminals from the queue and decode these nonterminals in one batch.
# 3.6 Argument Identiï¬cation | 1601.01280#18 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 19 | # 3.6 Argument Identiï¬cation
The majority of semantic parsing datasets have been developed with question-answering in mind. In the typical application setting, natural language questions are mapped into logical forms and ex- ecuted on a knowledge base to obtain an answer. Due to the nature of the question-answering task, many natural language utterances contain entities or numbers that are often parsed as arguments in the logical form. Some of them are unavoidably rare or do not appear in the training set at all (this is especially true for small-scale datasets). Con- ventional sequence encoders simply replace rare words with a special unknown word symbol (Lu- ong et al., 2015a; Jean et al., 2015), which would be detrimental for semantic parsing.
We have developed a simple procedure for ar- gument identiï¬cation. Speciï¬cally, we identify entities and numbers in input questions and re- place them with their type names and unique IDs. For instance, we pre-process the training example âjobs with a salary of 40000â and its logical form âjob(ANS), salary greater than(ANS, 40000, year)â as âjobs with a salary of num0â | 1601.01280#19 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 20 | and âjob(ANS), salary greater than(ANS, num0, year)â. We use the pre-processed examples as training data. At inference time, we also mask en- tities and numbers with their types and IDs. Once we obtain the decoding result, a post-processing step recovers all the markers typei to their corre- sponding logical constants.
# 4 Experiments
We compare our method against multiple previ- ous systems on four datasets. We describe these datasets below, and present our experimental set- tings and results. Finally, we conduct model anal- ysis in order to understand what the model learns. The code is available at https://github. com/donglixp/lang2logic.
# 4.1 Datasets
Our model was trained on the following datasets, covering different domains and using different meaning representations. Examples for each do- main are shown in Table 1.
JOBS This benchmark dataset contains 640 queries to a database of job listings. Speciï¬cally, questions are paired with Prolog-style queries. We used the same training-test split as Zettlemoyer and Collins (2005) which contains 500 training and 140 test instances. Values for the variables company, degree, location, job area, and number are identiï¬ed. | 1601.01280#20 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 21 | GEO This is a standard semantic parsing bench- mark which contains 880 queries to a database of U.S. geography. GEO has 880 instances split into a training set of 680 training examples and 200 test examples (Zettlemoyer and Collins, 2005). We used the same meaning representation based on lambda-calculus as Kwiatkowski et al. (2011). Values for the variables city, state, country, river, and number are identiï¬ed.
ATIS This dataset has 5, 410 queries to a ï¬ight booking system. The standard split has 4, 480 training instances, 480 development instances, and 450 test instances. Sentences are paired with lambda-calculus expressions. Values for the vari- ables date, time, city, aircraft code, airport, airline, and number are identiï¬ed.
IFTTT Quirk et al. (2015) created this dataset by extracting a large number of if-this-then-that | 1601.01280#21 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 22 | IFTTT Quirk et al. (2015) created this dataset by extracting a large number of if-this-then-that
Example what microsoft jobs do not require a bscs? answer(company(J,âmicrosoftâ),job(J),not((req deg(J,âbscsâ)))) what is the population of the state with the largest area? (population:i (argmax $0 (state:t $0) (area:i $0))) dallas to san francisco leaving after 4 in the afternoon please (lambda $0 e (and (>(departure time $0) 1600:ti) (from $0 dallas:ci) (to $0 san francisco:ci))) Turn on heater when temperature drops below 58 degree TRIGGER: Weather - Current temperature drops below - ((Temperature (58)) (Degrees in (f))) ACTION: WeMo Insight Switch - Turn on - ((Which switch? (ââ)))
Table 1: Examples of natural language descriptions and their meaning representations from four datasets. The average length of input and output sequences is shown in the second column. | 1601.01280#22 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 23 | recipes from the IFTTT website1. Recipes are sim- ple programs with exactly one trigger and one ac- tion which users specify on the site. Whenever the conditions of the trigger are satisï¬ed, the action is performed. Actions typically revolve around home security (e.g., âturn on my lights when I ar- rive homeâ), automation (e.g., âtext me if the door opensâ), well-being (e.g., âremind me to drink water if Iâve been at a bar for more than two hoursâ), and so on. Triggers and actions are se- lected from different channels (160 in total) rep- resenting various types of services, devices (e.g., Android), and knowledge sources (such as ESPN In the dataset, there are 552 trigger or Gmail). functions from 128 channels, and 229 action func- tions from 99 channels. We used Quirk et al.âs (2015) original split which contains 77, 495 train- ing, 5, 171 development, and 4, 294 test examples. The IFTTT programs are represented as abstract syntax trees and are paired with natural language descriptions provided by users (see Table 1). Here, numbers and URLs are identiï¬ed.
# 4.2 Settings | 1601.01280#23 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 24 | # 4.2 Settings
Natural language sentences were lowercased; mis- spellings were corrected using a dictionary based on the Wikipedia list of common misspellings. Words were stemmed using NLTK (Bird et al., 2009). For IFTTT, we ï¬ltered tokens, channels and functions which appeared less than ï¬ve times in the training set. For the other datasets, we ï¬l- tered input words which did not occur at least two times in the training set, but kept all tokens in the logical forms. Plain string matching was em- ployed to identify augments as described in Sec- tion 3.6. More sophisticated approaches could be used, however we leave this future work.
Method COCKTAIL (Tang and Mooney, 2001) PRECISE (Popescu et al., 2003) ZC05 (Zettlemoyer and Collins, 2005) DCS+L (Liang et al., 2013) TISP (Zhao and Huang, 2015) SEQ2SEQ â attention â argument SEQ2TREE Accuracy 79.4 88.0 79.3 90.7 85.0 87.1 77.9 70.7 90.0 83.6 â attention
Table 2: Evaluation results on JOBS. | 1601.01280#24 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 25 | Table 2: Evaluation results on JOBS.
on the training set for JOBS and GEO. We used the standard development sets for ATIS and IFTTT. We used the RMSProp algorithm (with batch size set to 20) to update the parameters. The smoothing constant of RMSProp was 0.95. Gradients were clipped at 5 to alleviate the exploding gradient problem (Pascanu et al., 2013). Parameters were randomly initialized from a uniform distribution U (â0.08, 0.08). A two-layer LSTM was used for IFTTT, while a one-layer LSTM was employed for the other domains. The dropout rate was se- lected from {0.2, 0.3, 0.4, 0.5}. Dimensions of hidden vector and word embedding were selected from {150, 200, 250}. Early stopping was used Input sen- to determine the number of epochs. tences were reversed before feeding into the en- coder (Sutskever et al., 2014). We use greedy search to generate logical forms during inference. Notice that two decoders with shared word em- beddings were used to predict triggers and actions for IFTTT, and two softmax classiï¬ers are used to classify channels and functions.
# 4.3 Results
Model hyper-parameters were cross-validated
# 1http://www.ifttt.com | 1601.01280#25 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 26 | # 4.3 Results
Model hyper-parameters were cross-validated
# 1http://www.ifttt.com
We ï¬rst discuss the performance of our model on JOBS, GEO, and ATIS, and then examine our re- sults on IFTTT. Tables 2â4 present comparisons against a variety of systems previously described
Method SCISSOR (Ge and Mooney, 2005) KRISP (Kate and Mooney, 2006) WASP (Wong and Mooney, 2006) λ-WASP (Wong and Mooney, 2007) LNLZ08 (Lu et al., 2008) ZC05 (Zettlemoyer and Collins, 2005) ZC07 (Zettlemoyer and Collins, 2007) UBL (Kwiatkowski et al., 2010) FUBL (Kwiatkowski et al., 2011) KCAZ13 (Kwiatkowski et al., 2013) DCS+L (Liang et al., 2013) TISP (Zhao and Huang, 2015) SEQ2SEQ â attention â argument SEQ2TREE Accuracy 72.3 71.7 74.8 86.6 81.8 79.3 86.1 87.9 88.6 89.0 87.9 88.9 84.6 72.9 68.6 87.1 76.8 â attention | 1601.01280#26 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 27 | Table 3: Evaluation results on GEO. 10-fold cross- validation is used for the systems shown in the top half of the table. The standard split of ZC05 is used for all other systems.
Method ZC07 (Zettlemoyer and Collins, 2007) UBL (Kwiatkowski et al., 2010) FUBL (Kwiatkowski et al., 2011) GUSP-FULL (Poon, 2013) GUSP++ (Poon, 2013) TISP (Zhao and Huang, 2015) SEQ2SEQ â attention â argument SEQ2TREE Accuracy 84.6 71.4 82.8 74.8 83.5 84.2 84.2 75.7 72.3 84.6 77.5 â attention
Table 4: Evaluation results on ATIS.
in the literature. We report results with the full models (SEQ2SEQ, SEQ2TREE) and two abla- tion variants, i.e., without an attention mechanism (âattention) and without argument identiï¬cation (âargument). We report accuracy which is de- ï¬ned as the proportion of the input sentences that are correctly parsed to their gold standard logical forms. Notice that DCS+L, KCAZ13 and GUSP output answers directly, so accuracy in this setting is deï¬ned as the percentage of correct answers. | 1601.01280#27 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 28 | Overall, SEQ2TREE is superior to SEQ2SEQ. This is to be expected since SEQ2TREE ex- plicitly models compositional structure. On the JOBS and GEO datasets which contain logical forms with nested structures, SEQ2TREE out- performs SEQ2SEQ by 2.9% and 2.5%, respec- tively. SEQ2TREE achieves better accuracy over SEQ2SEQ on ATIS too, however, the difference is smaller, since ATIS is a simpler domain without complex nested structures. We ï¬nd that adding atMethod retrieval phrasal sync classiï¬er posclass SEQ2SEQ â attention â argument SEQ2TREE Channel 28.9 19.3 18.1 48.8 50.0 54.3 54.0 53.9 55.2 54.3 +Func 20.2 11.3 10.6 35.2 36.9 39.2 37.9 38.6 40.1 38.2 F1 41.7 35.3 35.1 48.4 49.3 50.1 49.8 49.7 50.4 50.0 â attention
# (a) Omit non-English. | 1601.01280#28 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 29 | # (a) Omit non-English.
Method retrieval phrasal sync classiï¬er posclass SEQ2SEQ â attention â argument SEQ2TREE Channel 36.8 27.8 26.7 64.8 67.2 68.8 68.7 68.8 69.6 68.7 +Func 25.4 16.4 15.5 47.2 50.4 50.5 48.9 50.4 51.4 49.5 F1 49.0 39.9 37.6 56.5 57.7 60.3 59.5 59.7 60.4 60.2 â attention
(b) Omit non-English & unintelligible.
Method retrieval phrasal sync classiï¬er posclass SEQ2SEQ â attention â argument SEQ2TREE Channel 43.3 37.2 36.5 79.3 81.4 87.8 88.3 86.8 89.7 87.6 +Func 32.3 23.5 24.1 66.2 71.0 75.2 73.8 74.9 78.4 74.9 F1 56.2 45.5 42.8 65.0 66.5 73.7 72.9 70.8 74.2 73.5 â attention
(c) ⥠3 turkers agree with gold.
Table 5: Evaluation results on IFTTT. | 1601.01280#29 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 30 | (c) ⥠3 turkers agree with gold.
Table 5: Evaluation results on IFTTT.
tention substantially improves performance on all three datasets. This underlines the importance of utilizing soft alignments between inputs and out- puts. We further analyze what the attention layer learns in Figure 6. Moreover, our results show that argument identiï¬cation is critical for small- scale datasets. For example, about 92% of city names appear less than 4 times in the GEO train- ing set, so it is difï¬cult to learn reliable parame- ters for these words. In relation to previous work, the proposed models achieve comparable or better performance. Importantly, we use the same frame- work (SEQ2SEQ or SEQ2TREE) across datasets and meaning representations (Prolog-style logi- cal forms in JOBS and lambda calculus in the other two datasets) without modiï¬cation. Despite this relatively simple approach, we observe that SEQ2TREE ranks second on JOBS, and is tied for ï¬rst place with ZC07 on ATIS. | 1601.01280#30 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 31 | (a) which jobs pay num0 that do not require a degid0 (b) whatâs ï¬rst class fare round trip from ci0 to ci1 (c) what is the earliest ï¬ight from ci0 to ci1 tomorrow (d) what is the highest elevation in the co0
argmin | | st me | and { Tight A â50 | t ( from ] Ca | 7 ( to 0. a d ( a ( "departure time. COU mie © bad âREPSEEREE A geen) $v 3
(eam $0 ( and ( place:t â ( Toc:t : coo ) ) ( elevation: $0 â <ns 24 sv
ANS ) fSalary_greater than ( ANS
nN v
Figure 6: Alignments (same color rectangles) produced by the attention mechanism (darker color rep- resents higher attention score). Input sentences are reversed and stemmed. Model output is shown for SEQ2SEQ (a, b) and SEQ2TREE (c, d). | 1601.01280#31 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 32 | We illustrate examples of alignments produced by SEQ2SEQ in Figures 6a and 6b. Alignments produced by SEQ2TREE are shown in Figures 6c and 6d. Matrices of attention scores are com- puted using Equation (5) and are represented in grayscale. Aligned input words and logical form predicates are enclosed in (same color) rectan- gles whose overlapping areas contain the attention scores. Also notice that attention scores are com- puted by LSTM hidden vectors which encode con- text information rather than just the words in their current positions. The examples demonstrate that the attention mechanism can successfully model the correspondence between sentences and logi- cal forms, capturing reordering (Figure 6b), many- to-many (Figure 6a), and many-to-one alignments (Figures 6c,d). | 1601.01280#32 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 33 | the proposed derivation. We compare our model against posclass, the method introduced in Quirk et al. and several of their baselines. posclass is reminiscent of KRISP (Kate and Mooney, 2006), it learns distributions over productions given in- put sentences represented as a bag of linguistic features. The retrieval baseline ï¬nds the closest description in the training data based on charac- ter string-edit-distance and returns the recipe for that training program. The phrasal method uses phrase-based machine translation to generate the recipe, whereas sync extracts synchronous gram- mar rules from the data, essentially recreating WASP (Wong and Mooney, 2006). Finally, they use a binary classiï¬er to predict whether a produc- tion should be present in the derivation tree corre- sponding to the description. | 1601.01280#33 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 34 | For IFTTT, we follow the same evaluation pro- tocol introduced in Quirk et al. (2015). The dataset is extremely noisy and measuring accu- racy is problematic since predicted abstract syn- tax trees (ASTs) almost never exactly match the gold standard. Quirk et al. view an AST as a set of productions and compute balanced F1 in- stead which we also adopt. The ï¬rst column in Table 5 shows the percentage of channels selected correctly for both triggers and actions. The sec- ond column measures accuracy for both channels and functions. The last column shows balanced F1 against the gold tree over all productions in | 1601.01280#34 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 35 | Quirk et al. (2015) report results on the full test data and smaller subsets after noise ï¬lter- ing, e.g., when non-English and unintelligible de- scriptions are removed (Tables 5a and 5b). They also ran their system on a high-quality subset of description-program pairs which were found in the gold standard and at least three humans managed to independently reproduce (Table 5c). Across all subsets our models outperforms posclass and re- lated baselines. Again we observe that SEQ2TREE consistently outperforms SEQ2SEQ, albeit with a small margin. Compared to the previous datasets, the attention mechanism and our argument identiï¬cation method yield less of an improvement. This may be due to the size of Quirk et al. (2015) and the way it was created â user curated descrip- tions are often of low quality, and thus align very loosely to their corresponding ASTs.
# 4.4 Error Analysis
Finally, we inspected the output of our model in order to identify the most common causes of errors which we summarize below. | 1601.01280#35 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 36 | # 4.4 Error Analysis
Finally, we inspected the output of our model in order to identify the most common causes of errors which we summarize below.
Under-Mapping The attention model used in our experiments does not take the alignment his- tory into consideration. So, some question words, expecially in longer questions, may be ignored in the decoding process. This is a common prob- lem for encoder-decoder models and can be ad- dressed by explicitly modelling the decoding cov- erage of the source words (Tu et al., 2016; Cohn et al., 2016). Keeping track of the attention his- tory would help adjust future attention and guide the decoder towards untranslated source words.
Argument Identiï¬cation Some mentions are incorrectly identiï¬ed as arguments. For example, the word may is sometimes identiï¬ed as a month when it is simply a modal verb. Moreover, some argument mentions are ambiguous. For instance, 6 oâclock can be used to express either 6 am or 6 pm. We could disambiguate arguments based on contextual information. The execution results of logical forms could also help prune unreasonable arguments. | 1601.01280#36 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 37 | Rare Words Because the data size of JOBS, GEO, and ATIS is relatively small, some question words are rare in the training set, which makes it hard to estimate reliable parameters for them. One solution would be to learn word embeddings on unannotated text data, and then use these as pre- trained vectors for question words.
# 5 Conclusions
In this paper we presented an encoder-decoder neural network model for mapping natural lan- guage descriptions to their meaning representa- tions. We encode natural language utterances into vectors and generate their corresponding log- ical forms as sequences or trees using recur- rent neural networks with long short-term mem- ory units. Experimental results show that en- hancing the model with a hierarchical tree de- coder and an attention mechanism improves performance across the board. Extensive compar- isons with previous methods show that our ap- proach performs competitively, without recourse to domain- or representation-speciï¬c features. Di- rections for future work are many and varied. For example, it would be interesting to learn a model from question-answer pairs without access to tar- get logical forms. Beyond semantic parsing, we would also like to apply our SEQ2TREE model to related structured prediction tasks such as con- stituency parsing. | 1601.01280#37 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 38 | Acknowledgments We would like to thank Luke Zettlemoyer and Tom Kwiatkowski for shar- ing the ATIS dataset. The support of the European Research Council under award number 681760 âTranslating Multiple Modalities into Textâ is gratefully acknowledged.
# References
[Andreas et al.2013] Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as ma- chine translation. In Proceedings of the 51st ACL, pages 47â52, Soï¬a, Bulgaria.
and Luke Bootstrapping semantic Zettlemoyer. In Proceedings of the parsers from conversations. 2011 EMNLP, pages 421â432, Edinburgh, United Kingdom.
and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. TACL, 1(1):49â62.
[Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the ICLR, San Diego, California.
[Bird et al.2009] Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. OâReilly Media. | 1601.01280#38 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 39 | [Bird et al.2009] Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. OâReilly Media.
and Alexander Yates. 2013. Semantic parsing freebase: Towards In 2nd Joint Con- open-domain semantic parsing. ference on Lexical and Computational Semantics, pages 328â338, Atlanta, Georgia.
[Chen and Mooney2011] David L. Chen and Ray- mond J. Mooney. 2011. Learning to interpret nat- ural language navigation instructions from observa- tions. In Proceedings of the 15th AAAI, pages 859â 865, San Francisco, California.
[Cho et al.2014] Kyunghyun Cho, Bart van Merrien- boer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio.
2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 EMNLP, pages 1724â 1734, Doha, Qatar.
[Clarke et al.2010] James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the worldâs response. In Pro- ceedings of CONLL, pages 18â27, Uppsala, Swe- den. | 1601.01280#39 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 40 | [Cohn et al.2016] Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. Incorporating structural alignment biases into an attentional neu- In Proceedings of the 2016 ral translation model. NAACL, San Diego, California.
[Ge and Mooney2005] Ruifang Ge and Raymond J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proceedings of CoNLL, pages 9â16, Ann Arbor, Michigan.
[Goldwasser and Roth2011] Dan Goldwasser and Dan Roth. 2011. Learning from natural instructions. In Proceedings of the 22nd IJCAI, pages 1794â1800, Barcelona, Spain.
Phil Blunsom, Nando de Freitas, and Karl Moritz Hermann. 2014. A deep architecture for semantic parsing. In Proceedings of the ACL 2014 Workshop on Semantic Parsing, Atlanta, Georgia.
[He and Young2006] Yulan He and Steve Young. 2006. Semantic processing using the hidden vector state model. Speech Communication, 48(3-4):262â275. | 1601.01280#40 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 41 | [He and Young2006] Yulan He and Steve Young. 2006. Semantic processing using the hidden vector state model. Speech Communication, 48(3-4):262â275.
[Hermann et al.2015] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Ad- vances in Neural Information Processing Systems 28, pages 1684â1692. Curran Associates, Inc.
[Jean et al.2015] S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural ma- chine translation. In Proceedings of 53rd ACL and 7th IJCNLP, pages 1â10, Beijing, China.
[Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous In Proceedings of the 2013 translation models. EMNLP, pages 1700â1709, Seattle, Washington.
and Li Fei-Fei. 2015. Deep visual-semantic alignments In Proceedings for generating image descriptions. of CVPR, pages 3128â3137, Boston, Massachusetts. | 1601.01280#41 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 42 | and Li Fei-Fei. 2015. Deep visual-semantic alignments In Proceedings for generating image descriptions. of CVPR, pages 3128â3137, Boston, Massachusetts.
[Kate and Mooney2006] Rohit J. Kate and Raymond J. Mooney. 2006. Using string-kernels for learning se- mantic parsers. In Proceedings of the 21st COLING and 44th ACL, pages 913â920, Sydney, Australia.
[Kate et al.2005] Rohit J. Kate, Yuk Wah Wong, and Raymond J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the 20th AAAI, pages 1062â1068, Pittsburgh, Pennsyl- vania.
Krish- 2012. Weakly namurthy and Tom Mitchell. In Pro- supervised training of semantic parsers. ceedings of the 2012 EMNLP, pages 754â765, Jeju Island, Korea.
Luke Zettlemoyer, Sharon Goldwater, and Mark Steed- man. 2010. Inducing probabilistic CCG grammars from logical form with higher-order uniï¬cation. In Proceedings of the 2010 EMNLP, pages 1223â1233, Cambridge, Massachusetts. | 1601.01280#42 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 43 | Luke Zettlemoyer, Sharon Goldwater, and Mark Steed- Lexical generalization in CCG man. In Pro- grammar induction for semantic parsing. ceedings of the 2011 EMNLP, pages 1512â1523, Edinburgh, United Kingdom.
[Kwiatkowski et al.2013] Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-ï¬y ontology In Proceedings of the 2013 EMNLP, matching. pages 1545â1556, Seattle, Washington.
[Liang et al.2013] Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based com- positional semantics. Computational Linguistics, 39(2):389â446.
[Lu et al.2008] Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representa- In Proceedings of the 2008 EMNLP, pages tions. 783â792, Honolulu, Hawaii. | 1601.01280#43 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 44 | Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wo- jciech Zaremba. 2015a. Addressing the rare word problem in neural machine translation. In Proceed- ings of the 53rd ACL and 7th IJCNLP, pages 11â19, Beijing, China.
[Luong et al.2015b] Thang Luong, Hieu Pham, and Christopher D. Manning. 2015b. Effective ap- proaches to attention-based neural machine trans- lation. In Proceedings of the 2015 EMNLP, pages 1412â1421, Lisbon, Portugal.
[Mei et al.2016] Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to ac- In Proceedings of the 30th AAAI, tion sequences. Phoenix, Arizona. to appear.
[Miller et al.1996] Scott Miller, David Stallard, Robert Bobrow, and Richard Schwartz. 1996. A fully sta- tistical approach to natural language interfaces. In ACL, pages 55â61. | 1601.01280#44 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 45 | [Pascanu et al.2013] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difï¬culty of train- ing recurrent neural networks. In Proceedings of the 30th ICML, pages 1310â1318, Atlanta, Georgia.
[Poon2013] Hoifung Poon. 2013. Grounded unsuper- vised semantic parsing. In Proceedings of the 51st ACL, pages 933â943, Soï¬a, Bulgaria.
[Popescu et al.2003] Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In Proceedings of the 8th IUI, pages 149â157, Miami, Florida.
[Quirk et al.2015] Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learn- ing semantic parsers for if-this-then-that recipes. In Proceedings of 53rd ACL and 7th IJCNLP, pages 878â888, Beijing, China.
[Reddy et al.2014] Siva Reddy, Mirella Lapata, and Large-scale semantic 2014. TACL, Mark Steedman. parsing without question-answer pairs. 2(Oct):377â392. | 1601.01280#45 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 46 | [Rush et al.2015] Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceed- ings of the 2015 EMNLP, pages 379â389, Lisbon, Portugal.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learn- ing with neural networks. In Advances in Neural In- formation Processing Systems 27, pages 3104â3112. Curran Associates, Inc.
[Tang and Mooney2000] Lappoon R. Tang and Ray- mond J. Mooney. 2000. Automated construction of database interfaces: Intergrating statistical and rela- tional learning for semantic parsing. In Proceedings of the 2000 EMNLP, pages 133â141, Hong Kong, China.
[Tang and Mooney2001] Lappoon R. Tang and Ray- mond J. Mooney. 2001. Using multiple clause con- structors in inductive logic programming for seman- tic parsing. In Proceedings of the 12th ECML, pages 466â477, Freiburg, Germany. | 1601.01280#46 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 47 | [Thompson and Mooney2003] Cynthia A. Thompson and Raymond J. Mooney. 2003. Acquiring word-meaning mappings for natural language interfaces. Journal of Artificial Intelligence Research, 18:1–44.
[Tieleman and Hinton2012] T. Tieleman and G. Hinton. 2012. Lecture 6.5-RmsProp: Divide the gradient by a running average of its recent magnitude. Technical report.
[Tu et al.2016] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th ACL, Berlin, Germany.
[Vinyals et al.2015a] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015a. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28, pages 2755–2763. Curran Associates, Inc. | 1601.01280#47 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 48 | [Vinyals et al.2015b] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and tell: A neural image caption generator. In Proceedings of CVPR, pages 3156–3164, Boston, Massachusetts.
[Wong and Mooney2006] Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the 2006 NAACL, pages 439–446, New York, New York.
[Wong and Mooney2007] Yuk Wah Wong and Raymond J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th ACL, pages 960–967, Prague, Czech Republic.
[Woods1973] W. A. Woods. 1973. Progress in natural language understanding: An application to lunar geology. In Proceedings of the June 4-8, 1973, National Computer Conference and Exposition, pages 441–450, New York, NY. | 1601.01280#48 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 49 | [Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd ICML, pages 2048–2057, Lille, France.
[Zaremba et al.2015] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2015. Recurrent neural network regularization. In Proceedings of the ICLR, San Diego, California.
[Zelle and Mooney1996] John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the 19th AAAI, pages 1050–1055, Portland, Oregon.
[Zettlemoyer and Collins2005] Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the 21st UAI, pages 658–666, Toronto, ON. | 1601.01280#49 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.01280 | 50 | [Zettlemoyer and Collins2007] Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the EMNLP-CoNLL, pages 678–687, Prague, Czech Republic.
[Zhao and Huang2015] Kai Zhao and Liang Huang. 2015. Type-driven incremental semantic parsing with polymorphism. In Proceedings of the 2015 NAACL, pages 1416–1421, Denver, Colorado. | 1601.01280#50 | Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations. | http://arxiv.org/pdf/1601.01280 | Li Dong, Mirella Lapata | cs.CL | Accepted by ACL-16 | null | cs.CL | 20160106 | 20160606 | [] |
1601.00257 | 0 | arXiv:1601.00257v2 [gr-qc] 6 Jan 2016
Preprint typeset in JHEP style - HYPER VERSION
# Modave Lectures on Applied AdS/CFT with Numerics*
# Minyong Guo
Department of Physics, Beijing Normal University, Beijing, 100875, China [email protected]
# Chao Niu
School of Physics and Chemistry, Gwangju Institute of Science and Technology, Gwangju 500-712, Korea [email protected]
# Yu Tian
School of Physics, University of Chinese Academy of Sciences, Beijing 100049, China State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China [email protected]
# Hongbao Zhang
Department of Physics, Beijing Normal University, Beijing, 100875, China Theoretische Natuurkunde, Vrije Universiteit Brussel, and The International Solvay Institutes, Pleinlaan 2, B-1050 Brussels, Belgium [email protected] | 1601.00257#0 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 | [
{
"id": "1510.02804"
}
] |
1601.00257 | 1 | Abstract: These lecture notes are intended to serve as an introduction to applied AdS/CFT with numerics for an audience of graduate students and others with little background in the subject. The presentation begins with a poor man's review of current status of quantum gravity, where AdS/CFT correspondence is believed to be the well formulated quantum gravity in the anti-de Sitter space. Then we present the basic ingredients in applied AdS/CFT and introduce the relevant numerics for solving differential equations into which the bulk dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take the zero temperature holographic superfluid as a concrete example for case study. In passing, we also present some new results, which include the numerical evidence as well as an elegant analytic proof for the equality between the superfluid density and particle density, namely ρs = ρ, and the saturation to the predicted value 1/√2 by conformal field theory for the sound speed in the large chemical potential limit.
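Editorial aside on the quoted 1/√2: it is the standard conformal-fluid estimate for a (2+1)-dimensional boundary theory (that dimensionality is an assumption stated here, not text from these notes). Tracelessness of the CFT stress tensor fixes the equation of state, from which the sound speed follows:

```latex
% For a conformal perfect fluid in d boundary spacetime dimensions,
% T^\mu_{~\mu} = 0 implies \varepsilon = (d-1) p, hence
\begin{align}
  p = \frac{\varepsilon}{d-1}
  \quad\Longrightarrow\quad
  c_s^2 = \frac{\partial p}{\partial \varepsilon} = \frac{1}{d-1}
  \quad\stackrel{d=3}{=}\quad \frac{1}{2},
  \qquad c_s = \frac{1}{\sqrt{2}} .
\end{align}
```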
*Based on the series of lectures given by Hongbao Zhang at the Eleventh International Modave Summer School on Mathematical Physics, held in Modave, Belgium, September 2015.
# Contents | 1601.00257#1 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 | [
{
"id": "1510.02804"
}
] |
1601.00257 | 2 | *Based on the series of lectures given by Hongbao Zhang at the Eleventh International Modave Summer School on Mathematical Physics, held in Modave, Belgium, September 2015.
# Contents
1. Introduction
2. Quantum Gravity
   2.1 De Sitter space: Meta-observables
   2.2 Minkowski space: S-Matrix program
   2.3 Anti-de Sitter space: AdS/CFT correspondence
3. Applied AdS/CFT
   3.1 What AdS/CFT is
   3.2 Why AdS/CFT is reliable
   3.3 How useful AdS/CFT is
4. Numerics for Solving Differential Equations
   4.1 Newton-Raphson method (see the sketch after this list)
   4.2 Pseudo-spectral method
   4.3 Runge-Kutta method
5. Holographic Superfluid at Zero Temperature
   5.1 Variation of action, Boundary terms, and Choice of ensemble
   5.2 Asymptotic expansion, Counter terms, and Holographic renormalization
   5.3 Background solution, Free energy, and Phase transition
   5.4 Linear response theory, Optical conductivity, and Superfluid density
   5.5 Time domain analysis, Normal modes, and Sound speed
6. Concluding Remarks
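Item 4.1 above names the Newton-Raphson method; as a minimal, generic sketch of that iteration (illustration only, written in Python with a finite-difference Jacobian; none of this code is from the lectures):

```python
import numpy as np

def newton_raphson(F, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve F(x) = 0 for a vector x by Newton-Raphson iteration.

    The Jacobian is approximated by forward finite differences, a common
    fallback when an analytic Jacobian of the discretized equations is
    inconvenient to write down.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = np.asarray(F(x), dtype=float)
        if np.linalg.norm(Fx) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):              # build the Jacobian column by column
            dx = np.zeros(n)
            dx[j] = eps
            J[:, j] = (np.asarray(F(x + dx)) - Fx) / eps
        x = x - np.linalg.solve(J, Fx)  # Newton update
    return x

# toy usage: intersect the unit circle with the line y = x (illustration only)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[1] - v[0]])
print(newton_raphson(F, [1.0, 0.2]))    # -> approximately [0.7071, 0.7071]
```

In a holographic setting the unknowns would be field values on a spectral grid and F the discretized bulk equations, but the update step itself is unchanged.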
# 1. Introduction | 1601.00257#2 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 | [
{
"id": "1510.02804"
}
] |
1601.00257 | 3 | 4. Numerics for Solving Differential Equations
5. Holographic Superfluid at Zero Temperature
# 6. Concluding Remarks
# 1. Introduction
Different from the other more formal topics in this summer school, the emphasis of these lectures is on the applications of AdS/CFT correspondence and the involved numerical techniques. As theoretical physicists, we generically have a theory, or a paradigm as simple as possible, but the real world is always highly sophisticated. So it is usually not sufficient for us to play only with our analytical techniques when we try to have a better understanding of the rich world by our beautiful theory. This is how computational physics comes in the lives of theoretical physicists. AdS/CFT correspondence, as an explicit holographic implementation
21 | 1601.00257#3 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 | [
{
"id": "1510.02804"
}
] |
1601.00257 | 4 | â 1 â
21
of quantum gravity in anti-de Sitter space, has recently emerged as a powerful tool for one to address some universal behaviors of strongly coupled many body systems, which otherwise would not be amenable to the conventional approaches. Furthermore, applied AdS/CFT has been entering the era of Computational Holography, where numerics plays a more and more important role in such ongoing endeavors. Implementing those well developed techniques in Numerical Relativity is highly desirable but generically required to be geared since AdS has its own diï¬culties. In the course of attacking these unique diï¬culties, some new numerical schemes and computational techniques have also been devised. These lectures are intended as a basic introduction to the necessary numerics in applied AdS/CFT in particular for those beginning practitioners in this active ï¬eld. Hopefully in the end, the readers can appreciate the signiï¬cance of numerics in connecting AdS/CFT to the real world at least as we do. | 1601.00257#4 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 | [
{
"id": "1510.02804"
}
] |
1601.00257 | 5 | In the next section, we shall ï¬rst present a poor manâs review of the current status for quantum gravity, where AdS/CFT stands out as the well formulated quantum gravity in anti-de Sitter space. Then we provide a brief introduction to applied AdS/CFT in Section 3, which includes what AdS/CFT is, why AdS/CFT is reliable, and how useful AdS/CFT is. In Section 4, we shall present the main numerical methods for solving diï¬erential equations, which is supposed to be the central task in applied AdS/CFT. Then we take the zero temperature holographic superï¬uid as a concrete application of AdS/CFT with numerics in Section 5, where not only will some relevant concepts be introduced but also some new results will be presented for the ï¬rst time. We conclude these lecture notes with some remarks in the end.
# 2. Quantum Gravity | 1601.00257#5 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 | [
{
"id": "1510.02804"
}
] |
1601.00257 | 6 | # 2. Quantum Gravity
The very theme in physics is to unify a variety of seemingly distinct phenomena by as a few principles as possible, which can help us to build up a sense of safety while being faced up with the unknown world. This may be regarded as another contribution of the uniï¬cation in physics to our society on top of its various induced technology innovations. With a series of achievements along the road to uniï¬cation in physics, we now end up with the two distinct entities, namely quantum ï¬eld theory and general relativity. | 1601.00257#6 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 | [
{
"id": "1510.02804"
}
] |