id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
sequencelengths
1
1
1601.06759#30
Pixel Recurrent Neural Networks
This coincides with the size of the respective receptive fields: the Diagonal BiLSTM has a global view, the Row LSTM has a partially occluded view and the PixelCNN sees the fewest pixels in the context. This suggests that effectively capturing a large receptive field is important. Figure 7 (left) shows CIFAR-10 samples generated from the Diagonal BiLSTM. Table 4. Test set performance of different models on MNIST in nats (negative log-likelihood). Prior results taken from [1] (Salakhutdinov & Hinton, 2009), [2] (Murray & Salakhutdinov, 2009), [3] (Uria et al., 2014), [4] (Raiko et al., 2014), [5] (Rezende et al., 2014), [6] (Salimans et al., 2015), [7] (Gregor et al., 2014), [8] (Germain et al., 2015), [9] (Gregor et al., 2015).
1601.06759#29
1601.06759#31
1601.06759
[ "1511.01844" ]
1601.06759#31
Pixel Recurrent Neural Networks
# 5.7. ImageNet Although to our knowledge there are no published results on the ILSVRC ImageNet dataset (Russakovsky et al., 2015) that we can compare our models with, we give our Ima- Figure 8. Samples from models trained on ImageNet 64x64 images. Left: normal model, right: multi-scale model. The single-scale model trained on 64x64 images is less able to capture global structure than the 32x32 model. The multi-scale model seems to resolve this problem. Although these models get similar performance in log-likelihood, the samples on the right do seem globally more coherent. Model: Uniform Distribution; Multivariate Gaussian; NICE [1]; Deep Diffusion [2]; Deep GMMs [3]; RIDE [4]; PixelCNN; Row LSTM; Diagonal BiLSTM.
1601.06759#30
1601.06759#32
1601.06759
[ "1511.01844" ]
1601.06759#32
Pixel Recurrent Neural Networks
NLL Test (Train) in bits/dim: Uniform Distribution: 8.00; Multivariate Gaussian: 4.70; NICE [1]: 4.48; Deep Diffusion [2]: 4.20; Deep GMMs [3]: 4.00; RIDE [4]: 3.47; PixelCNN: 3.14 (3.08); Row LSTM: 3.07 (3.00); Diagonal BiLSTM: 3.00 (2.93). Table 5. Test set performance of different models on CIFAR-10 in bits/dim. For our models we give training performance in brackets. [1] (Dinh et al., 2014), [2] (Sohl-Dickstein et al., 2015), [3] (van den Oord & Schrauwen, 2014a), [4] personal communication (Theis & Bethge, 2015). Image size, NLL Validation (Train): 32x32: 3.86 (3.83); 64x64: 3.63 (3.57). Table 6. Negative log-likelihood performance on 32x32 and 64x64 ImageNet in bits/dim. Figure 9 (panels: occluded, completions, original). Image completions sampled from a model that was trained on 32x32 ImageNet images. Note that the diversity of the completions is high, which can be attributed to the log-likelihood loss function used in this generative model, as it encourages models with high entropy. As these are sampled from the model, we can easily generate millions of different completions. It is also interesting to see that textures such as water, wood and shrubbery are imputed relatively well (see Figure 1). geNet log-likelihood performance in Table 6 (without data augmentation). On ImageNet the current PixelRNNs do not appear to overfit, as we saw that their validation performance improved with size and depth. The main constraints on model size are currently computation time and GPU memory. Note that the ImageNet models are in general less compressible than the CIFAR-10 images. ImageNet has a greater variety of images, and the CIFAR-10 images were most likely resized with a different algorithm than the one we used for ImageNet images. The ImageNet images are less blurry, which means neighboring pixels are less correlated to each other and thus less predictable.
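Since Table 4 reports total negative log-likelihood in nats per image while Tables 5 and 6 report bits per dimension, it may help to note the standard unit conversion between the two. The relation below is written for these notes (assuming D subpixels per image, e.g. D = 3 x 32 x 32 for CIFAR-10) and is not taken from the paper:

```latex
% NLL in nats per image vs. bits per dimension, assuming D subpixels per image
\text{bits/dim} \;=\; \frac{\text{NLL}_{\text{nats per image}}}{D\,\ln 2},
\qquad D = 3 \times 32 \times 32 = 3072 \ \text{(CIFAR-10)}
```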
1601.06759#31
1601.06759#33
1601.06759
[ "1511.01844" ]
1601.06759#33
Pixel Recurrent Neural Networks
Because the downsampling method can influence the compression performance, we have made the used downsampled images available at http://image-net.org/small/download.php. Figure 7 (right) shows 32 x 32 samples drawn from our model trained on ImageNet. Figure 8 shows 64 x 64 samples from the same model with and without multi-scale conditioning. Finally, we also show image completions sampled from the model in Figure 9. # 6. Conclusion Graves, Alex and Schmidhuber, Jürgen. Off-
1601.06759#32
1601.06759#34
1601.06759
[ "1511.01844" ]
1601.06759#34
Pixel Recurrent Neural Networks
line handwriting recognition with multidimensional recurrent neural networks. In Advances in Neural Information Processing Systems, 2009. In this paper we significantly improve and build upon deep recurrent neural networks as generative models for natural images. We have described novel two-dimensional LSTM layers: the Row LSTM and the Diagonal BiLSTM, that scale more easily to larger datasets. The models were trained to model the raw RGB pixel values. We treated the pixel values as discrete random variables by using a softmax layer in the conditional distributions. We employed masked convolutions to allow PixelRNNs to model full dependencies between the color channels. We proposed and evaluated architectural improvements in these models resulting in PixelRNNs with up to 12 LSTM layers.
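To illustrate the masked-convolution idea mentioned above, here is a minimal PyTorch sketch written for these notes; it is not the authors' code and omits the per-color-channel masking needed to model full RGB dependencies:

```python
# Minimal sketch of a PixelCNN-style masked convolution: the mask zeroes out
# weights so each output position depends only on pixels above and to the left
# (mask "A" also excludes the current pixel itself; mask "B" includes it).
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kH, kW = self.kernel_size
        mask = torch.ones(self.out_channels, self.in_channels, kH, kW)
        mask[:, :, kH // 2, kW // 2 + (mask_type == "B"):] = 0  # center row, right of center
        mask[:, :, kH // 2 + 1:, :] = 0                         # rows below center
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # keep masked weights at zero
        return super().forward(x)

# The network output would then be a 256-way softmax per subpixel
# (logits of shape (batch, 256, 3, H, W)) trained with cross-entropy,
# matching the discrete treatment of pixel values described above.
```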
1601.06759#33
1601.06759#35
1601.06759
[ "1511.01844" ]
1601.06759#35
Pixel Recurrent Neural Networks
Gregor, Karol, Danihelka, Ivo, Mnih, Andriy, Blundell, Charles, and Wierstra, Daan. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, 2014. Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. DRAW: A recurrent neural network for image generation. Proceedings of the 32nd International Conference on Machine Learning, 2015.
1601.06759#34
1601.06759#36
1601.06759
[ "1511.01844" ]
1601.06759#36
Pixel Recurrent Neural Networks
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. We have shown that the PixelRNNs significantly improve the state of the art on the MNIST and CIFAR-10 datasets. We also provide new benchmarks for generative image modeling on the ImageNet dataset. Based on the samples and completions drawn from the models we can conclude that the PixelRNNs are able to model both spatially local and long-range correlations and are able to produce images that are sharp and coherent. Given that these models improve as we make them larger and that there is practically unlimited data available to train on, more computation and larger models are likely to further improve the results. Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 1997. Kalchbrenner, Nal and Blunsom, Phil. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013. Kalchbrenner, Nal, Danihelka, Ivo, and Graves, Alex. Grid long short-term memory. arXiv preprint arXiv:1507.01526, 2015. # Acknowledgements The authors would like to thank Shakir Mohamed and Guillaume Desjardins for helpful input on this paper and Lucas Theis, Alex Graves, Karen Simonyan, Lasse Espeholt, Danilo Rezende, Karol Gregor and Ivo Danihelka for insightful discussions. # References Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. Krizhevsky, Alex. Learning multiple layers of features from tiny images. 2009. Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. The Journal of Machine Learning Research, 2011. Bengio, Yoshua and Bengio, Samy.
1601.06759#35
1601.06759#37
1601.06759
[ "1511.01844" ]
1601.06759#37
Pixel Recurrent Neural Networks
Modeling high-dimensional discrete data with multi-layer neural networks. pp. 400–406. MIT Press, 2000. LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998. Dinh, Laurent, Krueger, David, and Bengio, Yoshua. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. Murray, Iain and Salakhutdinov, Ruslan. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems, 2009.
1601.06759#36
1601.06759#38
1601.06759
[ "1511.01844" ]
1601.06759#38
Pixel Recurrent Neural Networks
Germain, Mathieu, Gregor, Karol, Murray, Iain, and Larochelle, Hugo. MADE: Masked autoencoder for distribution estimation. arXiv preprint arXiv:1502.03509, 2015. Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. Neal, Radford M. Connectionist learning of belief networks. Artificial intelligence, 1992.
1601.06759#37
1601.06759#39
1601.06759
[ "1511.01844" ]
1601.06759#39
Pixel Recurrent Neural Networks
Raiko, Tapani, Li, Yao, Cho, Kyunghyun, and Bengio, Yoshua. Iterative neural autoregressive distribution estimator NADE-k. In Advances in Neural Information Processing Systems, 2014. Rezende, Danilo J, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, 2014. Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. van den Oord, Aäron and Schrauwen, Benjamin. Factoring variations in natural images with deep Gaussian mixture models. In Advances in Neural Information Processing Systems, 2014a. van den Oord, Aäron and Schrauwen, Benjamin. The student-t mixture as a natural image patch prior with application to image compression. The Journal of Machine Learning Research, 2014b. Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep Boltzmann machines.
1601.06759#38
1601.06759#40
1601.06759
[ "1511.01844" ]
1601.06759#40
Pixel Recurrent Neural Networks
In International Conference on Artificial Intelligence and Statistics, 2009. Salakhutdinov, Ruslan and Murray, Iain. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, 2008. Zhang, Yu, Chen, Guoguo, Yu, Dong, Yao, Kaisheng, Khudanpur, Sanjeev, and Glass, James. Highway long short-term memory RNNs for distant speech recognition. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2016. Salimans, Tim, Kingma, Diederik P, and Welling, Max.
1601.06759#39
1601.06759#41
1601.06759
[ "1511.01844" ]
1601.06759#41
Pixel Recurrent Neural Networks
Markov chain Monte Carlo and variational inference: Bridging the gap. Proceedings of the 32nd International Conference on Machine Learning, 2015. Sohl-Dickstein, Jascha, Weiss, Eric A., Maheswaranathan, Niru, and Ganguli, Surya. Deep unsupervised learning using nonequilibrium thermodynamics. Proceedings of the 32nd International Conference on Machine Learning, 2015. Stollenga, Marijn F, Byeon, Wonmin, Liwicki, Marcus, and Schmidhuber, Juergen. Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation. In Advances in Neural Information Processing Systems 28, 2015. Sutskever, Ilya, Martens, James, and Hinton, Geoffrey E.
1601.06759#40
1601.06759#42
1601.06759
[ "1511.01844" ]
1601.06759#42
Pixel Recurrent Neural Networks
Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, 2011. Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems, 2015. Theis, Lucas, van den Oord, Aäron, and Bethge, Matthias. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015. Uria, Benigno, Murray, Iain, and Larochelle, Hugo.
1601.06759#41
1601.06759#43
1601.06759
[ "1511.01844" ]
1601.06759#43
Pixel Recurrent Neural Networks
RNADE: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, 2013. Uria, Benigno, Murray, Iain, and Larochelle, Hugo. A deep and tractable density estimator. In Proceedings of the 31st International Conference on Machine Learning, 2014. Figure 10.
1601.06759#42
1601.06759#44
1601.06759
[ "1511.01844" ]
1601.06759#44
Pixel Recurrent Neural Networks
Additional samples from a model trained on ImageNet 32x32 (right) images.
1601.06759#43
1601.06759
[ "1511.01844" ]
1601.01705#0
Learning to Compose Neural Networks for Question Answering
arXiv:1601.01705v4 [cs.CL] 7 Jun 2016 # Learning to Compose Neural Networks for Question Answering Jacob Andreas and Marcus Rohrbach and Trevor Darrell and Dan Klein Department of Electrical Engineering and Computer Sciences University of California, Berkeley {jda,rohrbach,trevor,klein}@eecs.berkeley.edu # Abstract
1601.01705#1
1601.01705
[ "1511.05234" ]
1601.01705#1
Learning to Compose Neural Networks for Question Answering
We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural module network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.
1601.01705#0
1601.01705#2
1601.01705
[ "1511.05234" ]
1601.01705#2
Learning to Compose Neural Networks for Question Answering
Figure 1: A learned syntactic analysis (a) is used to assemble a collection of neural modules (b) into a deep neural network (c), and applied to a world representation (d) to produce an answer. (The panels show the question "What cities are in Georgia?", the module inventory (Section 4.1), a network layout (Section 4.2) built from find[city], relate[in] and lookup[Georgia], the knowledge source, and the answer "Atlanta".) # 1 Introduction
1601.01705#1
1601.01705#3
1601.01705
[ "1511.05234" ]
1601.01705#3
Learning to Compose Neural Networks for Question Answering
This paper presents a compositional, attentional model for answering questions about a variety of world representations, including images and structured knowledge bases. The model translates from questions to dynamically assembled neural networks, then applies these networks to world representations (images or knowledge bases) to produce answers. We take advantage of two largely independent lines of work: on one hand, an extensive literature on answering questions by mapping from strings to logical representations of meaning; on the other, a series of recent successes in deep neural models for image recognition and captioning. By constructing neural networks instead of logical forms, our model leverages the best aspects of both linguistic compositionality and continuous representations. Previous work has used manually-specified modular structures for visual learning (Andreas et al., 2016). Here we:
1601.01705#2
1601.01705#4
1601.01705
[ "1511.05234" ]
1601.01705#4
Learning to Compose Neural Networks for Question Answering
• learn a network structure predictor jointly with module parameters themselves • extend visual primitives from previous work to reason over structured world representations Training data consists of (world, question, answer) triples: our approach requires no supervision of network layouts. We achieve state-of-the-art performance on two markedly different question answering tasks: one with questions about natural images, and another with more compositional questions about United States geography. (We have released our code at http://github.com/jacobandreas/nmn2.) Our model has two components, trained jointly: first, a collection of neural "modules" that can be freely composed (Figure 1a); second, a network layout predictor that assembles modules into complete deep networks tailored to each question (Figure 1b). # 2 Deep networks as functional programs We begin with a high-level discussion of the kinds of composed networks we would like to learn. Andreas et al. (2016) describe a heuristic approach for decomposing visual question answering tasks into a sequence of modular sub-problems. For example, the question What color is the bird? might be answered in two steps:
1601.01705#3
1601.01705#5
1601.01705
[ "1511.05234" ]
1601.01705#5
Learning to Compose Neural Networks for Question Answering
first, "where is the bird?" (Figure 2a); second, "what color is that part of the image?" (Figure 2c). This first step, a generic module called find, can be expressed as a fragment of a neural network that maps from image features and a lexical item (here bird) to a distribution over pixels. This operation is commonly referred to as the attention mechanism, and is a standard tool for manipulating images (Xu et al., 2015) and text representations (Hermann et al., 2015).
1601.01705#4
1601.01705#6
1601.01705
[ "1511.05234" ]
1601.01705#6
Learning to Compose Neural Networks for Question Answering
The first contribution of this paper is an extension and generalization of this mechanism to enable fully-differentiable reasoning about more structured semantic representations. Figure 2b shows how the same module can be used to focus on the entity Georgia in a non-visual grounding domain; more generally, by representing every entity in the universe of discourse as a feature vector, we can obtain a distribution over entities that corresponds roughly to a logical set-valued denotation. Having obtained such a distribution, existing neural approaches use it to immediately compute a weighted average of image features and project back into a labeling decision: a describe module (Figure 2c). But the logical perspective suggests a number of novel modules that might operate on attentions: e.g. combining them (by analogy to conjunction or disjunction) or inspecting them directly without a return to feature space (by analogy to quantification, Figure 2d). These modules are discussed in detail in Section 4. Unlike their formal counterparts, they are differentiable end-to-end, facilitating their integration into learned models. Building on previous work, we learn behavior for a collection of heterogeneous modules from (world, question, answer) triples. The second contribution of this paper is a model for learning to assemble such modules compositionally. Isolated modules are of limited use: to obtain expressive power comparable to either formal approaches or monolithic deep networks, they must be composed into larger structures. Figure 2 shows simple examples of composed structures, but for realistic question-answering tasks, even larger net-
1601.01705#5
1601.01705#7
1601.01705
[ "1511.05234" ]
1601.01705#7
Learning to Compose Neural Networks for Question Answering
Figure 2: Simple neural module networks, corresponding to the questions What color is the bird? and Are there any states? (a) A neural find module for computing an attention over pixels. (b) The same operation applied to a knowledge base. (c) Using an attention produced by a lower module to identify the color of the region of the image attended to. (d) Performing quantification by evaluating an attention directly.
1601.01705#6
1601.01705#8
1601.01705
[ "1511.05234" ]
1601.01705#8
Learning to Compose Neural Networks for Question Answering
works are required. Thus our goal is to automatically induce variable-free, tree-structured computation descriptors. We can use a familiar functional notation from formal semantics (e.g. Liang et al., 2011) to represent these computations.2 We write the two examples in Figure 2 as (describe[color] find[bird]) and (exists find[state]) respectively. These are network layouts: they specify a structure for arranging modules (and their lexical parameters) into a complete network. Andreas et al. (2016) use hand-written rules to deterministically transform dependency trees into layouts, and are restricted to producing simple structures like the above for non-synthetic data. For full generality, we will need to solve harder problems, like transforming What cities are in Georgia? (Figure 1) into
1601.01705#7
1601.01705#9
1601.01705
[ "1511.05234" ]
1601.01705#9
Learning to Compose Neural Networks for Question Answering
(and find[city] (relate[in] lookup[Georgia])) In this paper, we present a model for learning to select such structures from a set of automatically generated candidates. We call this model a dynamic neural module network. 2But note that unlike formal semantics, the behavior of the primitive functions here is itself unknown. # 3 Related work There is an extensive literature on database question answering, in which strings are mapped to logical forms, then evaluated by a black-box execution model to produce answers. Supervision may be provided either by annotated logical forms (Wong and Mooney, 2007; Kwiatkowski et al., 2010; Andreas et al., 2013) or from (world, question, answer) triples alone (Liang et al., 2011; Pasupat and Liang, 2015). In general the set of primitive functions from which these logical forms can be assembled is fixed, but one recent line of work focuses on inducing new predicates automatically, either from perceptual features (Krishnamurthy and Kollar, 2013) or the underlying schema (Kwiatkowski et al., 2013). The model we describe in this paper has a unified framework for handling both the perceptual and schema cases, and differs from existing work primarily in learning a differentiable execution model with continuous evaluation results. Neural models for question answering are also a subject of current interest. These include approaches that model the task directly as a multiclass classification problem (Iyyer et al., 2014), models that attempt to embed questions and answers in a shared vector space (Bordes et al., 2014) and attentional models that select words from document sources (Hermann et al., 2015). Such approaches generally require that answers can be retrieved directly based on surface linguistic features, without requiring intermediate computation. A more structured approach described by Yin et al. (2015) learns a query execution model for database tables without any natural language component.
1601.01705#8
1601.01705#10
1601.01705
[ "1511.05234" ]
1601.01705#10
Learning to Compose Neural Networks for Question Answering
Previous efforts toward unifying formal logic and representation learning include those of Grefenstette (2013), Krishnamurthy and Mitchell (2013), Lewis and Steedman (2013), and Beltagy et al. (2013). The visually-grounded component of this work relies on recent advances in convolutional networks for computer vision (Simonyan and Zisserman, 2014), and in particular the fact that late convolutional layers in networks trained for image recognition contain rich features useful for other vision tasks while preserving spatial information. These features have been used for both image captioning (Xu et al., 2015) and visual QA (Yang et al., 2015). Most previous approaches to visual question answering either apply a recurrent model to deep representations of both the image and the question (Ren et al., 2015; Malinowski et al., 2015), or use the question to compute an attention over the input image, and then answer based on both the question and the image features attended to (Yang et al., 2015; Xu and Saenko, 2015). Other approaches include the simple classification model described by Zhou et al. (2015) and the dynamic parameter prediction network described by Noh et al. (2015). All of these models assume that a fixed computation can be performed on the image and question to compute the answer, rather than adapting the structure of the computation to the question. As noted, Andreas et al. (2016) previously considered a simple generalization of these attentional approaches in which small variations in the network structure per-question were permitted, with the structure chosen by (deterministic) syntactic processing of questions. Other approaches in this general family include the "universal parser" sketched by Bottou (2014), the graph transformer networks of Bottou et al. (1997), the knowledge-based neural networks of Towell and Shavlik (1994) and the recursive neural networks of Socher et al. (2013), which use a fixed tree structure to perform further linguistic analysis without any external world representation. We are unaware of previous work that simultaneously learns both parameters for and structures of instance-specific networks.
1601.01705#9
1601.01705#11
1601.01705
[ "1511.05234" ]
1601.01705#11
Learning to Compose Neural Networks for Question Answering
We are unaware of previous work that simultaneously learns both parameters for and structures of instance-specific networks. # 4 Model Recall that our goal is to map from questions and world representations to answers. This process involves the following variables: 1. w a world representation 2. x a question 3. y an answer 4. z a network layout 5. θ a collection of model parameters Our model is built around two distributions: a layout model p(z | x; θ_ℓ) which chooses a layout for a sentence, and an execution model p_z(y | w; θ_e) which applies the network specified by z to w. For ease of presentation, we introduce these models in reverse order.
1601.01705#10
1601.01705#12
1601.01705
[ "1511.05234" ]
1601.01705#12
Learning to Compose Neural Networks for Question Answering
We first imagine that z is always observed, and in Section 4.1 describe how to evaluate and learn modules parameterized by θ_e within fixed structures. In Section 4.2 we move to the real scenario, where z is unknown. We describe how to predict layouts from questions and learn θ_ℓ and θ_e jointly without layout supervision. # 4.1 Evaluating modules Given a layout z, we assemble the corresponding modules into a full neural network (Figure 1c), and apply it to the knowledge representation. Intermediate results flow between modules until an answer is produced at the root. We denote the output of the network with layout z on input world w as [[z]]_w; when explicitly referencing the substructure of z, we can alternatively write [[m(h^1, h^2)]] for a top-level module m with submodule outputs h^1 and h^2. We then define the execution model: p_z(y | w) = ([[z]]_w)_y (1) (This assumes that the root module of z produces a distribution over labels y.) The set of possible layouts z is restricted by module type constraints: some modules (like find above) operate directly on the input representation, while others (like describe above) also depend on input from specific earlier modules. The two base types considered in this paper are Attention (a distribution over pixels or entities) and Labels (a distribution over answers). Parameters are tied across multiple instances of the same module, so different instantiated networks may share some parameters but not others. Modules have both parameter arguments (shown in square brackets) and ordinary inputs (shown in parentheses). Parameter arguments, like the running bird example above, are provided by the layout, and are used to specialize module behavior for particular lexical items. Ordinary inputs are the result of computation lower in the network. In addition to parameter-specific weights, modules have global weights shared across all instances of the module (but not shared with other modules). We write A, a, B, b, ... for global weights and u^i, v^i for weights associated with the parameter argument i. ⊕ and ⊙ denote (possibly broadcasted) elementwise addition and multiplication respectively. The complete set of global weights and parameter-specific weights constitutes θ_e.
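As a purely illustrative sketch written for these notes (not the released nmn2 code), a layout such as (describe[color] find[bird]) can be represented as a nested tuple and evaluated recursively against a registry of module functions:

```python
# Illustrative only: layouts as nested tuples, evaluated bottom-up.
# The (module_name, parameter_arg, *children) convention and the `modules`
# registry are assumptions made for this sketch, not the paper's interface.

def evaluate(layout, world, modules):
    """E.g. layout = ("describe", "color", ("find", "bird"))."""
    name, param, *children = layout
    child_outputs = [evaluate(child, world, modules) for child in children]
    # Each module sees the world representation, its lexical parameter argument,
    # and the outputs (attentions) of its children; the root returns label scores.
    return modules[name](world, param, *child_outputs)

# Example (assuming `modules` maps names to differentiable functions):
# answer_dist = evaluate(("describe", "color", ("find", "bird")), world, modules)
```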
1601.01705#11
1601.01705#13
1601.01705
[ "1511.05234" ]
1601.01705#13
Learning to Compose Neural Networks for Question Answering
Every module has access to the world representation, represented as a collection of vectors w1, w2, . . . (or W expressed as a matrix). The nonlinearity σ denotes a rectified linear unit. The modules used in this paper are shown below, with names and type constraints in the first row and a description of the module's computation following. Lookup (→ Attention) lookup[i] produces an attention focused entirely at the index f(i), where the relationship f between words and positions in the input map is known ahead of time (e.g. string matches on database fields). [[lookup[i]]] = e_f(i) (2) where e_i is the basis vector that is 1 in the ith position and 0 elsewhere.
1601.01705#12
1601.01705#14
1601.01705
[ "1511.05234" ]
1601.01705#14
Learning to Compose Neural Networks for Question Answering
Find (→ Attention) find[i] computes a distribution over indices by concatenating the parameter argument with each position of the input feature map, and passing the concatenated vector through a MLP: [[find[i]]] = softmax(a ⊙ σ(B u^i ⊕ C W ⊕ d)) (3) Relate (Attention → Attention) relate directs focus from one region of the input to another. It behaves much like the find module, but also conditions its behavior on the current region of attention h. Let w̄(h) = Σ_k h_k w^k, where h_k is the kth element of h. Then, [[relate[i](h)]] = softmax(a ⊙ σ(B u^i ⊕ C W ⊕ D w̄(h) ⊕ e)) (4) And (Attention* → Attention) and performs an operation analogous to set intersection for attentions. The analogy to probabilistic logic suggests multiplying probabilities: [[and(h^1, h^2, ...)]] = h^1 ⊙ h^2 ⊙ ··· (5) Describe (Attention → Labels) describe[i] computes a weighted average of w under the input attention. This average is then used to predict an answer representation. With w̄ as above, [[describe[i](h)]] = softmax(A σ(B w̄(h) + v^i)) (6) Exists (Attention → Labels) exists is the existential quantifier, and inspects the incoming attention directly to produce a label, rather than an intermediate feature vector like describe: [[exists(h)]] = softmax((max_k h_k) a + b) (7)
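To make the notation concrete, here is a small numpy sketch of the find, and, and describe modules written for these notes. The parameter shapes and the scalar-score reduction inside find (reading a as a projection vector) are assumptions made for illustration, not the authors' implementation; each module owns its own global weights (the B in find is distinct from the B in describe):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relu(x):
    return np.maximum(x, 0.0)

def find(W, u_i, a, B, C, d):
    # W: (N, dw) world representation; u_i: (du,) embedding of lexical item i.
    hidden = relu(u_i @ B.T + W @ C.T + d)     # (N, hidden_dim), cf. Eq. (3)
    return softmax(hidden @ a)                 # attention over the N positions

def and_(*attentions):
    out = attentions[0].copy()
    for h in attentions[1:]:
        out = out * h                          # Eq. (5): soft set intersection
    return out

def describe(W, h, v_i, A, B):
    w_bar = h @ W                              # (dw,) attention-weighted average
    return softmax(A @ relu(B @ w_bar + v_i))  # Eq. (6): distribution over answers
```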
1601.01705#13
1601.01705#15
1601.01705
[ "1511.05234" ]
1601.01705#15
Learning to Compose Neural Networks for Question Answering
Figure 3: Generation of layout candidates. The input sentence (a) is represented as a dependency parse (b). Fragments of this dependency parse are then associated with appropriate modules (c), and these fragments are assembled into full layouts (d). (The running example is What cities are in Georgia?, with fragments find[city], relate[in] and lookup[Georgia].) With z observed, the model we have described so far corresponds largely to that of Andreas et al. (2016), though the module inventory is different: in particular, our new exists and relate modules do not depend on the two-dimensional spatial structure of the input. This enables generalization to non-visual world representations. Learning in this simplified setting is straightforward. Assuming the top-level module in each layout is a describe or exists module, the fully-instantiated network corresponds to a distribution over labels conditioned on layouts. To train, we maximize Σ_(w,y,z) log p_z(y | w; θ_e) directly. This can be understood as a parameter-tying scheme, where the decisions about which parameters to tie are governed by the observed layouts z. # 4.2 Assembling networks Next we describe the layout model p(z | x; θ_ℓ). We first use a fixed syntactic parse to generate a small set of candidate layouts, analogously to the way a semantic grammar generates candidate semantic parses in previous work (Berant and Liang, 2014). A semantic parse differs from a syntactic parse in two primary ways. First, lexical items must be mapped onto a (possibly smaller) set of semantic primitives. Second, these semantic primitives must be combined into a structure that closely, but not exactly, parallels the structure provided by syntax. For example, state and province might need to be identified with the same field in a database schema, while all states have a capital might need to be identified with the correct (in situ) quantifier scope.
1601.01705#14
1601.01705#16
1601.01705
[ "1511.05234" ]
1601.01705#16
Learning to Compose Neural Networks for Question Answering
While we cannot avoid the structure selection problem, continuous representations simplify the lexical selection problem. For modules that accept a vector parameter, we associate these parameters with words rather than semantic tokens, and thus turn the combinatorial optimization problem associated with lexicon induction into a continuous one. Now, in order to learn that province and state have the same denotation, it is sufficient to learn that their associated parameters are close in some embedding space, a task amenable to gradient descent. (Note that this is easy only in an optimizability sense, and not an information-theoretic one; we must still learn to associate each independent lexical item with the correct vector.) The remaining combinatorial problem is to arrange the provided lexical items into the right computational structure. In this respect, layout prediction is more like syntactic parsing than ordinary semantic parsing, and we can rely on an off-the-shelf syntactic parser to get most of the way there. In this work, syntactic structure is provided by the Stanford dependency parser (De Marneffe and Manning, 2008). The construction of layout candidates is depicted in Figure 3, and proceeds as follows:
1601.01705#15
1601.01705#17
1601.01705
[ "1511.05234" ]
1601.01705#17
Learning to Compose Neural Networks for Question Answering
1. Represent the input sentence as a dependency tree. 2. Collect all nouns, verbs, and prepositional phrases that are attached directly to a wh-word or copula. 3. Associate each of these with a layout fragment: Ordinary nouns and verbs are mapped to a single find module. Proper nouns to a single lookup module. Prepositional phrases are mapped to a depth-2 fragment, with a relate module for the preposition above a find module for the enclosed head noun.
1601.01705#16
1601.01705#18
1601.01705
[ "1511.05234" ]
1601.01705#18
Learning to Compose Neural Networks for Question Answering
4. Form subsets of this set of layout fragments. For each subset, construct a layout candidate by joining all fragments with an and module, and inserting either a measure or describe module at the top (each subset thus results in two parse candidates). All layouts resulting from this process feature a relatively flat tree structure with at most one conjunction and one quantifier. This is a strong simplifying assumption, but appears sufficient to cover most of the examples that appear in both of our tasks. As our approach includes both categories, relations and simple quantification, the range of phenomena considered is generally broader than previous perceptually-grounded QA work (Krishnamurthy and Kollar, 2013; Matuszek et al., 2012). Having generated a set of candidate parses, we need to score them. This is a ranking problem; as in the rest of our approach, we solve it using standard neural machinery. In particular, we produce an LSTM representation of the question, a feature-based representation of the query, and pass both representations through a multilayer perceptron (MLP). The query feature vector includes indicators on the number of modules of each type present, as well as their associated parameter arguments. While one can easily imagine a more sophisticated parse-scoring model, this simple approach works well for our tasks. Formally, for a question x, let h_q(x) be an LSTM encoding of the question (i.e. the last hidden layer of an LSTM applied word-by-word to the input question). Let {z_1, z_2, . . .} be the proposed layouts for x, and let f(z_i) be a feature vector representing the ith layout. Then the score s(z_i | x) for the layout z_i is s(z_i | x) = a^T σ(B h_q(x) + C f(z_i) + d) (8) i.e. the output of an MLP with inputs h_q(x) and f(z_i), and parameters θ_ℓ = {a, B, C, d}. Finally, we normalize these scores to obtain a distribution: p(z_i | x; θ_ℓ) = e^{s(z_i | x)} / Σ_{j=1}^{n} e^{s(z_j | x)} (9)
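A minimal sketch of this scoring step (Eqs. 8-9) might look as follows; the PyTorch module below is an illustration written for these notes under assumed dimensions and names, not the released implementation:

```python
# Sketch of the layout-scoring MLP of Eqs. (8)-(9); shapes/names are assumptions.
import torch
import torch.nn as nn

class LayoutScorer(nn.Module):
    def __init__(self, question_dim, layout_feat_dim, hidden_dim):
        super().__init__()
        self.B = nn.Linear(question_dim, hidden_dim, bias=False)
        self.C = nn.Linear(layout_feat_dim, hidden_dim, bias=False)
        self.d = nn.Parameter(torch.zeros(hidden_dim))
        self.a = nn.Parameter(torch.randn(hidden_dim) * 0.01)

    def forward(self, h_q, layout_feats):
        # h_q: (question_dim,) LSTM encoding of the question.
        # layout_feats: (n_layouts, layout_feat_dim) feature vectors f(z_i).
        hidden = torch.relu(self.B(h_q) + self.C(layout_feats) + self.d)  # Eq. (8)
        scores = hidden @ self.a                                          # s(z_i | x)
        return torch.softmax(scores, dim=0)                               # Eq. (9)
```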
1601.01705#17
1601.01705#19
1601.01705
[ "1511.05234" ]
1601.01705#19
Learning to Compose Neural Networks for Question Answering
Having defined a layout selection module p(z | x; θ_ℓ) and a network execution model p_z(y | w; θ_e), we are ready to define a model for predicting answers given only (world, question) pairs. The key constraint is that we want to minimize evaluations of p_z(y | w; θ_e) (which involves expensive application of a deep network to a large input representation), but can tractably evaluate p(z | x; θ_ℓ) for all z (which involves application of a shallow network to a relatively small set of candidates). This is the opposite of the situation usually encountered in semantic parsing, where calls to the query execution model are fast but the set of candidate parses is too large to score exhaustively. In fact, the problem more closely resembles the scenario faced by agents in the reinforcement learning setting (where it is cheap to score actions, but potentially expensive to execute them and obtain rewards). We adopt a common approach from that literature, and express our model as a stochastic policy. Under this policy, we first sample a layout z from a distribution p(z | x; θ_ℓ), and then apply z to the knowledge source and obtain a distribution over answers p(y | z, w; θ_e). After z is chosen, we can train the execution model directly by maximizing log p(y | z, w; θ_e) with respect to θ_e as before (this is ordinary backpropagation). Because the hard selection of z is non-differentiable, we optimize p(z | x; θ_ℓ) using a policy gradient method. The gradient of the reward surface J with respect to the parameters of the policy is ∇J(θ_ℓ) = E[∇ log p(z | x; θ_ℓ) · r] (10) (this is the REINFORCE rule (Williams, 1992)). Here the expectation is taken with respect to rollouts of the policy, and r is the reward. Because our goal is to select the network that makes the most accurate predictions, we take the reward to be identically the negative log-probability from the execution phase, i.e. ∇J(θ_ℓ) = E[∇ log p(z | x; θ_ℓ) · log p(y | z, w; θ_e)] (11) Thus the update to the layout-scoring model at each timestep is simply the gradient of the log-probability of the chosen layout, scaled by the accuracy of that layout's
1601.01705#18
1601.01705#20
1601.01705
[ "1511.05234" ]
1601.01705#20
Learning to Compose Neural Networks for Question Answering
predictions. At training time, we approximate the expectation with a single rollout, so at each step we update θ_ℓ in the direction (∇ log p(z | x; θ_ℓ)) · log p(y | z, w; θ_e) for a single z ~ p(z | x; θ_ℓ). θ_ℓ and θ_e are optimized using ADADELTA with ρ = 0.95, ε = 1e-6 and gradient clipping at a norm of 10.
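The single-rollout update just described can be sketched as follows; this is illustrative PyTorch code written for these notes with assumed helper functions (score_layouts, execute) and optimizers, not the paper's released implementation:

```python
# Illustrative single-rollout policy-gradient step; `score_layouts` and `execute`
# are assumed helpers, not part of any released API.
import torch

def training_step(question, world, answer, layouts, score_layouts, execute,
                  opt_layout, opt_exec):
    probs = score_layouts(question, layouts)        # p(z | x; theta_l), Eq. (9)
    idx = torch.multinomial(probs, 1).item()        # sample one layout z
    answer_logprobs = execute(layouts[idx], world)  # log p(y | z, w; theta_e)
    reward = answer_logprobs[answer].detach()       # scalar reward for the policy

    # Execution model: ordinary maximum likelihood on the chosen layout.
    exec_loss = -answer_logprobs[answer]
    # Layout model: REINFORCE, log-prob of the sampled layout scaled by the reward.
    layout_loss = -torch.log(probs[idx]) * reward

    opt_layout.zero_grad(); opt_exec.zero_grad()
    (exec_loss + layout_loss).backward()
    opt_layout.step(); opt_exec.step()
```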
1601.01705#19
1601.01705#21
1601.01705
[ "1511.05234" ]
1601.01705#21
Learning to Compose Neural Networks for Question Answering
Figure 4: Sample outputs for the visual question answering task. (The three examples are: What is in the sheep's ear? with layout (describe[what] (and find[sheep] find[ear])) and answer tag; What color is she wearing? with layout (describe[color] find[wear]) and answer white; What is the man dragging? with layout (describe[what] find[man]) and answer boat (board).) The second row shows the final attention provided as input to the top-level describe module. For the first two examples, the model produces reasonable parses, attends to the correct region of the images (the ear and the woman's clothing), and generates the correct answer. In the third image, the verb is discarded and a wrong answer is produced. # 5 Experiments
1601.01705#20
1601.01705#22
1601.01705
[ "1511.05234" ]
1601.01705#22
Learning to Compose Neural Networks for Question Answering
The framework described in this paper is general, and we are interested in how well it performs on datasets of varying domain, size and linguistic complexity. To that end, we evaluate our model on tasks at opposite extremes of both these criteria: a large visual question answering dataset, and a small collection of more structured geography questions. # 5.1 Questions about images Our first task is the recently-introduced Visual Question Answering challenge (VQA) (Antol et al., 2015). The VQA dataset consists of more than 200,000 images paired with human-annotated questions and answers, as in Figure 4. We use the VQA 1.0 release, employing the development set for model selection and hyperparameter tuning, and reporting final results from the evaluation server on the test-standard set. For the experiments described in this section, the input feature representations wi are computed by the fifth convolutional layer of a 16-layer VGGNet after pooling (Simonyan and Zisserman, 2014). Input images are scaled to 448x448 before computing their representations. We found that performance on this task was
1601.01705#21
1601.01705#23
1601.01705
[ "1511.05234" ]
1601.01705#23
Learning to Compose Neural Networks for Question Answering
We found that performance on this task was test-dev test-std Yes/No Number Other All All Zhou (2015) 76.6 Noh (2015) 80.7 Yang (2015) 79.3 81.2 NMN 81.1 D-NMN 35.0 37.2 36.6 38.0 38.6 42.6 41.7 46.1 44.0 45.5 55.7 57.2 58.7 58.6 59.4 55.9 57.4 58.9 58.7 59.4 Table 1: Results on the VQA test server.
1601.01705#22
1601.01705#24
1601.01705
[ "1511.05234" ]
1601.01705#24
Learning to Compose Neural Networks for Question Answering
NMN is the parameter-tying model from Andreas et al. (2015), and D-NMN is the model described in this paper. best if the candidate layouts were relatively simple: only describe, and, and find modules are used, and layouts contain at most two conjuncts. One weakness of this basic framework is a difficulty modeling prior knowledge about answers (of the form most bears are brown). This kind of linguistic "prior" is essential for the VQA task, and easily incorporated. We simply introduce an extra hidden layer for recombining the final module network output with the input sentence representation h_q(x) (see Equation 8), replacing Equation 1 with: log p_z(y | w, x) = (A h_q(x) + B [[z]]_w)_y (12) (Now modules with output type Labels should be understood as producing an answer embedding rather than a distribution over answers.) This allows the question to influence the answer directly. Results are shown in Table 1. The use of dynamic networks provides a small gain, most noticeably on
1601.01705#23
1601.01705#25
1601.01705
[ "1511.05234" ]
1601.01705#25
Learning to Compose Neural Networks for Question Answering
"other" questions. We achieve state-of-the-art results on this task, outperforming a highly effective visual bag-of-words model (Zhou et al., 2015), a model with dynamic network parameter prediction (but fixed network structure) (Noh et al., 2015), a more conventional attentional model (Yang et al., 2015), and a previous approach using neural module networks with no structure prediction (Andreas et al., 2016). Some examples are shown in Figure 4. In general, the model learns to focus on the correct region of the image, and tends to consider a broad window around the region. This facilitates answering questions like Where is the cat?, which requires knowledge of the surroundings as well as the object in question. Accuracy (Model: GeoQA, GeoQA+Q): LSP-F: 48, –; LSP-W: 51, –; NMN: 51.7, 35.7; D-NMN: 54.3, 42.9.
1601.01705#24
1601.01705#26
1601.01705
[ "1511.05234" ]
1601.01705#26
Learning to Compose Neural Networks for Question Answering
Table 2: Results on the GeoQA dataset, and the GeoQA dataset with quantification. Our approach outperforms both a purely logical model (LSP-F) and a model with learned perceptual predicates (LSP-W) on the original dataset, and a fixed-structure NMN under both evaluation conditions. # 5.2 Questions about geography The next set of experiments we consider focuses on GeoQA, a geographical question-answering task
1601.01705#25
1601.01705#27
1601.01705
[ "1511.05234" ]
1601.01705#27
Learning to Compose Neural Networks for Question Answering
first introduced by Krishnamurthy and Kollar (2013). This task was originally paired with a visual question answering task much simpler than the one just discussed, and is appealing for a number of reasons. In contrast to the VQA dataset, GeoQA is quite small, containing only 263 examples. Two baselines are available: one using a classical semantic parser backed by a database, and another which induces logical predicates using linear classifiers over both spatial and distributional features. This allows us to evaluate the quality of our model relative to other perceptually grounded logical semantics, as well as strictly logical approaches. The GeoQA domain consists of a set of entities (e.g. states, cities, parks) which participate in various relations (e.g. north-of, capital-of). Here we take the world representation to consist of two pieces: a set of category features (used by the find module) and a different set of relational features (used by the relate module). For our experiments, we use a subset of the features originally used by Krishnamurthy et al. The original dataset includes no
1601.01705#26
1601.01705#28
1601.01705
[ "1511.05234" ]
1601.01705#28
Learning to Compose Neural Networks for Question Answering
quantifiers, and treats the questions What cities are in Texas? and Are there any cities in Texas? identically. Because we are interested in testing the parser's ability to predict a variety of different structures, we introduce a new version of the dataset, GeoQA+Q, which distinguishes these two cases, and expects a Boolean answer to questions of the second kind. Results are shown in Table 2. As in the original work, we report the results of leave-one-environment-out cross-validation on the set of 10 en-
1601.01705#27
1601.01705#29
1601.01705
[ "1511.05234" ]
1601.01705#29
Learning to Compose Neural Networks for Question Answering
Is Key Largo an island? (exists (and lookup[key-largo] find[island])) yes: correct What national parks are in Florida? (and find[park] (relate[in] lookup[florida])) everglades: correct What are some beaches in Florida? (exists (and lookup[beach] (relate[in] lookup[florida]))) yes (daytona-beach): wrong parse What beach city is there in Florida? (and lookup[beach] lookup[city] (relate[in] lookup[florida])) [none] (daytona-beach): wrong module behavior Figure 5: Example layouts and answers selected by the model on the GeoQA dataset. For incorrect predictions, the correct answer is shown in parentheses. vironments. Our dynamic model (D-NMN) outperforms both the logical (LSP-F) and perceptual models (LSP-W) described by (Krishnamurthy and Kollar, 2013), as well as a fixed-structure neural module net (NMN). This improvement is particularly notable on the dataset with quantifiers, where dynamic structure prediction produces a 20% relative improvement over the fixed baseline. A variety of predicted layouts are shown in Figure 5. # 6 Conclusion
1601.01705#28
1601.01705#30
1601.01705
[ "1511.05234" ]
1601.01705#30
Learning to Compose Neural Networks for Question Answering
We have introduced a new model, the dynamic neural module network, for answering queries about both structured and unstructured sources of information. Given only (question, world, answer) triples as training data, the model learns to assemble neural networks on the fly from an inventory of neural models, and simultaneously learns weights for these modules so that they can be composed into novel structures. Our approach achieves state-of-the-art results on two tasks. We believe that the success of this work derives from two factors: Continuous representations improve the expressiveness and learnability of semantic parsers: by replacing discrete predicates with differentiable neural network fragments, we bypass the challenging combinatorial optimization problem associated with induction of a semantic lexicon. In structured world representations, neural predicate representations allow the model to invent reusable attributes and relations not expressed in the schema. Perhaps more importantly, we can extend compositional question-answering machinery to complex, continuous world representations like images. Semantic structure prediction improves generalization in deep networks: by replacing a fixed network topology with a dynamic one, we can tailor the computation performed to each problem instance, using deeper networks for more complex questions and representing combinatorially many queries with comparatively few parameters. In practice, this results in considerable gains in speed and sample efficiency, even with very little training data. These observations are not limited to the question answering domain, and we expect that they can be applied similarly to tasks like instruction following, game playing, and language generation.
1601.01705#29
1601.01705#31
1601.01705
[ "1511.05234" ]
1601.01705#31
Learning to Compose Neural Networks for Question Answering
# Acknowledgments JA is supported by a National Science Foundation Graduate Fellowship. MR is supported by a fellowship within the FIT weltweit-Program of the German Academic Exchange Service (DAAD). This work was additionally supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Vision and Learning Center. # References Jacob Andreas, Andreas Vlachos, and Stephen Clark.
1601.01705#30
1601.01705#32
1601.01705
[ "1511.05234" ]
1601.01705#32
Learning to Compose Neural Networks for Question Answering
2013. Semantic parsing as machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In Proceedings of the International Conference on Computer Vision. Islam Beltagy, Cuong Chau, Gemma Boleda, Dan Garrette, Katrin Erk, and Raymond Mooney. 2013.
1601.01705#31
1601.01705#33
1601.01705
[ "1511.05234" ]
1601.01705#33
Learning to Compose Neural Networks for Question Answering
Montague meets Markov: Deep semantics with probabilistic logical form. Proceedings of the Joint Conference on Distributional and Logical Semantics, pages 11–21. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, volume 7, page 92. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. Proceedings of the Conference on Empirical Methods in Natural Language Processing. Léon Bottou, Yoshua Bengio, and Yann Le Cun. 1997. Global training of document processing systems using graph transformer networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 489–
1601.01705#32
1601.01705#34
1601.01705
[ "1511.05234" ]
1601.01705#34
Learning to Compose Neural Networks for Question Answering
494. IEEE. Léon Bottou. 2014. From machine learning to machine reasoning. Machine learning, 94(2):133–149. Marie-Catherine De Marneffe and Christopher D Manning. 2008. The Stanford typed dependencies representation. In Proceedings of the International Conference on Computational Linguistics, pages 1–8. Edward Grefenstette. 2013. Towards a formal distributional semantics: Simulating logical calculi with tensors. Joint Conference on Lexical and Computational Semantics.
1601.01705#33
1601.01705#35
1601.01705
[ "1511.05234" ]
1601.01705#35
Learning to Compose Neural Networks for Question Answering
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684–1692. Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daumé III. 2014.
1601.01705#34
1601.01705#36
1601.01705
[ "1511.05234" ]
1601.01705#36
Learning to Compose Neural Networks for Question Answering
A neural network for factoid question answering over paragraphs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: connecting natural language to the physical world. Transactions of the Association for Computational Linguistics. Jayant Krishnamurthy and Tom Mitchell. 2013. Vector space semantic parsing: A framework for compositional vector space models. In Proceedings of the ACL Workshop on Continuous Vector Space Models and their Compositionality.
1601.01705#35
1601.01705#37
1601.01705
[ "1511.05234" ]
1601.01705#37
Learning to Compose Neural Networks for Question Answering
Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1223–1233, Cambridge, Massachusetts. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013.
1601.01705#36
1601.01705#38
1601.01705
[ "1511.05234" ]
1601.01705#38
Learning to Compose Neural Networks for Question Answering
Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Mike Lewis and Mark Steedman. 2013. Combining distributional and logical semantics. Transactions of the Association for Computational Linguistics, 1:179–192. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the Human Language Technology Conference of the Association for Computational Linguistics, pages 590–599, Portland, Oregon. Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2015.
1601.01705#37
1601.01705#39
1601.01705
[ "1511.05234" ]
1601.01705#39
Learning to Compose Neural Networks for Question Answering
Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the International Conference on Computer Vision. Cynthia Matuszek, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded attribute learning. In International Conference on Machine Learning. Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han. 2015. Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756. Panupong Pasupat and Percy Liang. 2015.
1601.01705#38
1601.01705#40
1601.01705
[ "1511.05234" ]
1601.01705#40
Learning to Compose Neural Networks for Question Answering
Compositional semantic parsing on semi-structured tables. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Exploring models and data for image question answering. In Advances in Neural Information Processing Systems. K Simonyan and A Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013.
1601.01705#39
1601.01705#41
1601.01705
[ "1511.05234" ]
1601.01705#41
Learning to Compose Neural Networks for Question Answering
Parsing with compositional vector grammars. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Geoffrey G. Towell and Jude W. Shavlik. 1994. Knowledge-based artificial neural networks. Artificial Intelligence, 70(1):119–165. Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Yuk Wah Wong and Raymond J. Mooney. 2007.
1601.01705#40
1601.01705#42
1601.01705
[ "1511.05234" ]
1601.01705#42
Learning to Compose Neural Networks for Question Answering
Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, volume 45, page 960. Huijuan Xu and Kate Saenko. 2015. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. arXiv preprint arXiv:1511.05234. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015.
1601.01705#41
1601.01705#43
1601.01705
[ "1511.05234" ]
1601.01705#43
Learning to Compose Neural Networks for Question Answering
Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2015. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274. Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2015. Neural enquirer: Learning to query tables. arXiv preprint arXiv:1512.00965. Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.
1601.01705#42
1601.01705#44
1601.01705
[ "1511.05234" ]
1601.01705#44
Learning to Compose Neural Networks for Question Answering
Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. Simple baseline for visual question answering. arXiv preprint arXiv:1512.02167.
1601.01705#43
1601.01705
[ "1511.05234" ]
1601.00257#0
Modave Lectures on Applied AdS/CFT with Numerics
arXiv:1601.00257v2 [gr-qc] 6 Jan 2016
Preprint typeset in JHEP style - HYPER VERSION
# Modave Lectures on Applied AdS/CFT with Numerics*
# Minyong Guo
Department of Physics, Beijing Normal University, Beijing, 100875, China [email protected]
# Chao Niu
School of Physics and Chemistry, Gwangju Institute of Science and Technology, Gwangju 500-712, Korea [email protected]
# Yu Tian
School of Physics, University of Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China [email protected]
# Hongbao Zhang
Department of Physics, Beijing Normal University, Beijing, 100875, China; Theoretische Natuurkunde, Vrije Universiteit Brussel, and The International Solvay Institutes, Pleinlaan 2, B-1050 Brussels, Belgium [email protected]
Abstract: These lecture notes are intended to serve as an introduction to applied AdS/CFT with numerics for an audience of graduate students and others with little background in the subject. The presentation begins with a poor man's review of the current status of quantum gravity, where AdS/CFT correspondence is believed to be the well formulated quantum gravity in the anti-de Sitter space. Then we present the basic ingredients in applied AdS/CFT and introduce the relevant numerics for solving differential equations into which the bulk dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take the zero temperature holographic superfl
1601.00257#1
1601.00257
[ "1510.02804" ]
1601.00257#1
Modave Lectures on Applied AdS/CFT with Numerics
uid as a concrete example for case study. In passing, we also present some new results, which include the numerical evidence as well as an elegant analytic proof for the equality between the superfluid density and particle density, namely ρ_s = ρ, and the saturation to the predicted value 1/√2 by conformal field theory for the sound speed in the large chemical potential limit.
* Based on the series of lectures given by Hongbao Zhang at the Eleventh International Modave Summer School on Mathematical Physics, held in Modave, Belgium, September 2015.
# Contents
1. Introduction
2. Quantum Gravity
2.1 De Sitter space: Meta-observables
2.2 Minkowski space: S-Matrix program
2.3 Anti-de Sitter space: AdS/CFT correspondence
3. Applied AdS/CFT
3.1 What AdS/CFT is
3.2 Why AdS/CFT is reliable
3.3 How useful AdS/CFT is
4. Numerics for Solving Differential Equations
4.1 Newton-Raphson method
4.2 Pseudo-spectral method
4.3 Runge-Kutta method
5. Holographic Superfluid at Zero Temperature
5.1 Variation of action, Boundary terms, and Choice of ensemble
5.2 Asymptotic expansion, Counter terms, and Holographic renormalization
5.3 Background solution, Free energy, and Phase transition
5.4 Linear response theory, Optical conductivity, and Superfluid density
5.5 Time domain analysis, Normal modes, and Sound speed
6. Concluding Remarks
1601.00257#0
1601.00257#2
1601.00257
[ "1510.02804" ]
1601.00257#2
Modave Lectures on Applied AdS/CFT with Numerics
# 1. Introduction
Different from the other more formal topics in this summer school, the emphasis of these lectures is on the applications of AdS/CFT correspondence and the involved numerical techniques. As theoretical physicists, we generically have a theory, or a paradigm as simple as possible, but the real world is always highly sophisticated.
1601.00257#1
1601.00257#3
1601.00257
[ "1510.02804" ]
1601.00257#3
Modave Lectures on Applied AdS/CFT with Numerics
So it is usually not sufficient for us to play only with our analytical techniques when we try to have a better understanding of the rich world by our beautiful theory. This is how computational physics comes in the lives of theoretical physicists. AdS/CFT correspondence, as an explicit holographic implementation
1601.00257#2
1601.00257#4
1601.00257
[ "1510.02804" ]
1601.00257#4
Modave Lectures on Applied AdS/CFT with Numerics
culties. In the course of attacking these unique diï¬ culties, some new numerical schemes and computational techniques have also been devised. These lectures are intended as a basic introduction to the necessary numerics in applied AdS/CFT in particular for those beginning practitioners in this active ï¬ eld. Hopefully in the end, the readers can appreciate the signiï¬ cance of numerics in connecting AdS/CFT to the real world at least as we do. In the next section, we shall ï¬
1601.00257#3
1601.00257#5
1601.00257
[ "1510.02804" ]
1601.00257#5
Modave Lectures on Applied AdS/CFT with Numerics
rst present a poor manâ s review of the current status for quantum gravity, where AdS/CFT stands out as the well formulated quantum gravity in anti-de Sitter space. Then we provide a brief introduction to applied AdS/CFT in Section 3, which includes what AdS/CFT is, why AdS/CFT is reliable, and how useful AdS/CFT is. In Section 4, we shall present the main numerical methods for solving diï¬ erential equations, which is supposed to be the central task in applied AdS/CFT. Then we take the zero temperature holographic superï¬ uid as a concrete application of AdS/CFT with numerics in Section 5, where not only will some relevant concepts be introduced but also some new results will be presented for the ï¬
1601.00257#4
1601.00257#6
1601.00257
[ "1510.02804" ]
1601.00257#6
Modave Lectures on Applied AdS/CFT with Numerics
rst time. We conclude these lecture notes with some remarks in the end. # 2. Quantum Gravity The very theme in physics is to unify a variety of seemingly distinct phenomena by as a few principles as possible, which can help us to build up a sense of safety while being faced up with the unknown world. This may be regarded as another contribution of the uniï¬ cation in physics to our society on top of its various induced technology innovations. With a series of achievements along the road to uniï¬ cation in physics, we now end up with the two distinct entities, namely quantum ï¬
1601.00257#5
1601.00257#7
1601.00257
[ "1510.02804" ]
1601.00257#7
Modave Lectures on Applied AdS/CFT with Numerics
eld theory and general relativity. As we know, quantum ï¬ eld theory is a powerful framework for us to understand a huge range of phenomena in Nature such as high energy physics and condensed matter physics. Although the underlying philosophies are diï¬ erent, they share quantum ï¬ eld theory as their common language. In high energy physics, the philosophy is reductionism, where the goal is to ï¬ gure out the UV physics for our eï¬ ective low energy IR physics. The standard model for particle physics is believed to be an eï¬ ective low energy theory. To see what really happens at UV, we are required to go beyond the standard model by reaching a higher energy scale. This is the reason why we built LHC in Geneva. This is also the reason why we plan to go to the Great Collider from the Great Wall in China.
1601.00257#6
1601.00257#8
1601.00257
[ "1510.02804" ]
1601.00257#8
Modave Lectures on Applied AdS/CFT with Numerics
While in condensed matter physics, the philosophy is emergence. Actually we have a theory of everything for condensed matter physics, namely QED, or the Schrodinger equation for electrons with Coulomb interaction â 2 â J (u=7) North South Pole Pole (y=9) (y=) J (u=0) Figure 1: The Penrose diagram for the global de Sitter space, where the planar de Sitter space associated with the observer located at the south pole is given by the shaded portion.
1601.00257#7
1601.00257#9
1601.00257
[ "1510.02804" ]
1601.00257#9
Modave Lectures on Applied AdS/CFT with Numerics
among them. What condensed matter physicists are concerned with is how to engineer various low temperature IR ï¬ xed points, namely various phases from such a known UV theory. Such a variety of phases gives rise to a man-made multiverse, which is actually resonant to the landscape suggested by string theory. On the other hand, general relativity tells us that gravity is geometry. Gravity is dif- ferent, so subtle is gravity. The very longstanding issue in fundamental physics is trying to reconcile general relativity with quantum field theory. People like to give a name to it, called Quantum Gravity although we have not fully succeeded along this lane.
1601.00257#8
1601.00257#10
1601.00257
[ "1510.02804" ]
1601.00257#10
Modave Lectures on Applied AdS/CFT with Numerics
Here is a poor manâ s perspective into the current status of quantum gravity, depending on the asymptotic 1. The reason is twofold. First, due to the existence of Planck scale geometry of spacetime lp = (yar, spacetime is doomed such that one can not define local field operators in a d+1 dimensional gravitational theory. Instead, the observables can live only on the boundary of spacetime. Second, it is the dependence on the asymptopia that embodies the background independence of quantum gravity. # 2.1 De Sitter space: Meta-observables If the spacetime is asymptotically de Sitter as
1601.00257#9
1601.00257#11
1601.00257
[ "1510.02804" ]
1601.00257#11
Modave Lectures on Applied AdS/CFT with Numerics
ds2 = â dt2 + l2 cosh2 t l dâ ¦2 d, (2.1) when t â ±â , then by the coordinate transformation u = 2 tanâ 1 e # t l , the metric becomes ds2 = l2 sin2 u (du2 + dÏ 2 + sin2 Ï dâ ¦2 dâ 1) (2.2) 1This is a poor manâ s perspective because we shall try our best not to touch upon string theory although it is evident that this perspective is well shaped by string theory in a direct or indirect way throughout these lecture notes. â 3 â with Ï
1601.00257#10
1601.00257#12
1601.00257
[ "1510.02804" ]
1601.00257#12
Modave Lectures on Applied AdS/CFT with Numerics
the polar angle for the d-sphere. We plot the Penrose diagram in Figure 1 for de Sitter space. Whence both the past and future conformal inï¬ nity I â are spacelike. As a result, any observer can only detect and inï¬ uence portion of the whole spacetime. Moreover, any point in I + is causally connected by a null geodesic to its antipodal point in I â for de Sitter. In view of this, Witten has proposed the meta-observables for quantum gravity in de Sitter space, namely
1601.00257#11
1601.00257#13
1601.00257
[ "1510.02804" ]
1601.00257#13
Modave Lectures on Applied AdS/CFT with Numerics
[" cit) = [" Dge (2.3) Ii Ii with gf and g; a set of data specified on .% ~ respectively. Then one can construct the Hilbert space H; at %~ for quantum gravity in de Sitter space with the inner product (j,i) = (Q¥|#) by CPT transformation ©. The Hilbert space Hy at %+ can be constructed in a similar fashion. At the perturbative level, the dimension of Hilbert space for quantum gravity in de Sitter is infinite, which is evident from the past-future singularity of the meta-correlation functions at those points connected by the aforementioned geodesics. But it is suspected that the non-perturbative dimension of Hilbert space is supposed to be finite.
1601.00257#12
1601.00257#14
1601.00257
[ "1510.02804" ]
1601.00257#14
Modave Lectures on Applied AdS/CFT with Numerics
This is all one can say with such mata-observables[1]. However, there are also diï¬ erent but more optimistic perspectives. Among others, in- spired by AdS/CFT, Strominger has proposed DS/CFT correspondence. First, with I + identiï¬ ed as I â by the above null geodesics, the dual CFT lives only on one sphere rather than two spheres. Second, instead of working with the global de Sitter space, DS/CFT cor- respondence can be naturally formulated in the causal past of any given observer, where the bulk spacetime is the planar de Sitter and the dual CFT lives on I â
1601.00257#13
1601.00257#15
1601.00257
[ "1510.02804" ]
1601.00257#15
Modave Lectures on Applied AdS/CFT with Numerics
. For details, the readers are referred to Stromingerâ s original paper as well as his Les Houches lectures[2, 3]. # 2.2 Minkowski space: S-Matrix program The situation is much better if the spacetime is asymptotically ï¬ at. As the Penrose diagram for Minkowski space shows in Figure 2, the conformal inï¬ nity is lightlike. In this case, the only observable is scattering amplitude, abbreviated as S-Matrix, which connects the out states at I + to the in states at I â
1601.00257#14
1601.00257#16
1601.00257
[ "1510.02804" ]
1601.00257#16
Modave Lectures on Applied AdS/CFT with Numerics
2. One can claim to have a well deï¬ ned quantum gravity in asymptotically ï¬ at space once a sensible recipe is made for the computation of S-Matrix with gravitons. Actually, inspired by BCFW recursion relation[4], there has been much progress achieved over the last few years along this direction by the so called S-Matrix program, in which the scattering amplitude is constructed without the local Lagrangian, resonant to the non-locality of quantum gravity[5]. Traditionally, S-Matrix is computed by the Feynman diagram techniques, where the Feynman rules come from the local Lagrangian. But the computation becomes more and more complicated when the scattering process involves either more external legs or higher loops. While in the S-Matrix program the recipe for the 2Here we are concerned with the scattering amplitude for massless particles, including gravitons, since they are believed to be more fundamental than massive particles. But nevertheless by taking into account the data at i±, the scattering amplitude with massive particles involved can still be constructed in principle as it should be the case.
1601.00257#15
1601.00257#17
1601.00257
[ "1510.02804" ]
1601.00257#17
Modave Lectures on Applied AdS/CFT with Numerics
â 4 â Figure 2: The Penrose diagram for Minkowski space, where massless particles will always emanate from I â and end at I +. Figure 3: The Penrose diagram for the global anti-de Sitter space, where the conformal inï¬ nity I itself can be a spacetime on which the dynamics can live. computation of scattering amplitude, made out of the universal properties of S-Matrix, such as Poincare or BMS symmetry, unitarity and analyticity of S-Matrix, turns out to be far more eï¬ cient. It is expected that such an ongoing S-Matrix program will lead us eventually towards a well formulated quantum gravity in asymptotically ï¬ at space.
1601.00257#16
1601.00257#18
1601.00257
[ "1510.02804" ]
1601.00257#18
Modave Lectures on Applied AdS/CFT with Numerics
Figure 2: The Penrose diagram for Minkowski space, where massless particles will always emanate from ℐ− and end at ℐ+.
Figure 3: The Penrose diagram for the global anti-de Sitter space, where the conformal infinity ℐ itself can be a spacetime on which the dynamics can live.
computation of scattering amplitude, made out of the universal properties of S-Matrix, such as Poincare or BMS symmetry, unitarity and analyticity of S-Matrix, turns out to be far more efficient. It is expected that such an ongoing S-Matrix program will lead us eventually towards a well formulated quantum gravity in asymptotically flat space.
1601.00257#17
1601.00257#19
1601.00257
[ "1510.02804" ]
1601.00257#19
Modave Lectures on Applied AdS/CFT with Numerics
# 3. Applied AdS/CFT # 3.1 What AdS/CFT is To be a little bit more precise about what AdS/CFT is, let us ï¬ rst recall the very basic object in quantum ï¬ eld theory, namely the generating functional, which is deï¬ ned as Za\J] _ ini [ DweisalÂ¥l+s da JO). (3.1) Whence one can obtain the n-point correlation function for the operator O by taking the n-th functional derivative of the generating functional with respect to the source J. For example, \ bLa (0(e)) = Se, (3.2) 2 5O(x (O(01)O(a2)) = â 224 = 90(@) (3.3) bJ(a)dS (a2) dS (a2) As we know, we can obtain such a generating functional by perturbative expansion using the Feynman diagram techniques for weakling coupled quantum ï¬ eld theory, but obviously such a perturbation method breaks down when the involved quantum ï¬ eld theory is strongly coupled except one can ï¬ nd its weak dual. AdS/CFT provides us with such a dual for strongly coupled quantum ï¬ eld theory by a classical gravitational theory with one extra dimension. So now let us turn to general relativity, where the basic object is the action given by
1601.00257#18
1601.00257#20
1601.00257
[ "1510.02804" ]
1601.00257#20
Modave Lectures on Applied AdS/CFT with Numerics
Sd+1 = 1 16Ï G dd+1x â â g(R + d(d â 1) l2 + Lmatter) (3.4) for AdS gravity. Here for the present illustration and later usage, we would like to choose the Lagrangian for the matter ï¬ elds as Lmatter = l2 Q2 (â 1 4 F abFab â |DΦ|2 â m2|Φ|2) (3.5) â 6 â with F = dA, D = â â iA and Q the charge of complex scalar ï¬
1601.00257#19
1601.00257#21
1601.00257
[ "1510.02804" ]
1601.00257#21
Modave Lectures on Applied AdS/CFT with Numerics
eld. The variation of action gives rise to the equations of motion as follows Gab â d(d â 1) 2l2 gab = l2 Q2 [FacFb c + 2DaΦDbΦ â ( 1 4 FcdF cd + |DΦ|2 + m2|Φ|2)gab], (3.6) (3.7) â aF ab = i(ΦDbΦ â ΦDbΦ), DaDaΦ â m2Φ = 0. (3.8) Note that the equations of motion are generically second order PDEs. So to extrapolate the bulk solution from the AdS boundary, one is required to specify a pair of boundary conditions for each bulk ï¬ eld at the conformal boundary of AdS, which can be read oï¬ from the asymptotical behavior for the bulk ï¬ elds near the AdS boundary ds2 â l2 z2 [dz2 + (γµν + tµνzd)dxµdxν], (3.9) (3.10) # Aµ â aµ + bµzdâ 2, Φ â Ï â zâ â + Ï +zâ + (3.11) # a with â ± = d 4 + m2l23. Namely (γµν, tµν) are the boundary data for the bulk metric ï¬ eld, (aµ, bµ) for the bulk gauge ï¬ eld, and (Ï â , Ï +) for the bulk scalar ï¬
1601.00257#20
1601.00257#22
1601.00257
[ "1510.02804" ]
1601.00257#22
Modave Lectures on Applied AdS/CFT with Numerics
eld. But such pairs usually lead to singular solutions deep into the bulk. To avoid these singular solutions, one can instead specify the only one boundary condition from each pair such as (γµν, aµ, Ï â ). We denote these boundary data by J, whose justiï¬ cation will be obvious later on. At the same time we also require the regularity of the desired solution in the bulk. In this sense, the regular solution is uniquely determined by the boundary data J. Thus the on-shell action from the regular solution will be a functional of J. What AdS/CFT tells us is that this on-shell action in the bulk can be identiï¬ ed as the generating functional for strongly coupled quantum ï¬ eld theory living on the boundary, i.e., Zd[J] = Sd+1[J], (3.12) where apparently J has a dual meaning, not only serving as the source for the boundary quantum ï¬ eld theory but also being the boundary data for the bulk ï¬ elds. In particular, γµν sources the operator for the boundary energy momentum tensor whose expectation value is given by (3.3) as tµν, aµ sources a global U (1) conserved current operator whose expectation value is given as bµ, and the expectation value for the operator dual to the source Ï
1601.00257#21
1601.00257#23
1601.00257
[ "1510.02804" ]
1601.00257#23
Modave Lectures on Applied AdS/CFT with Numerics
â is given as Ï + up to a possible proportional coeï¬ cient. The conformal dimension for these dual operators can be read oï¬ from (3.9) by making the scaling transformation (z, xµ) â (αz, αxµ) as d, d â 1, and â + individually. 3Here we are working with the axial gauge for the bulk metric and gauge ï¬ elds, which can always been achieved. In addition, although the mass square is allowed to be negative in the AdS it can not be below the BF bound â
1601.00257#22
1601.00257#24
1601.00257
[ "1510.02804" ]
1601.00257#24
Modave Lectures on Applied AdS/CFT with Numerics
d2 â 7 â Here is a caveat on the validity of (3.12). Although such a boundary/bulk duality is believed to hold in more general circumstances, (3.12) works for the large N strongly coupled quantum ï¬ eld theory on the boundary where N and the coupling parameter of the dual , respectively. quantum ï¬ eld theory are generically proportional to some powers of In order to capture the 1 N correction to the dual quantum ï¬ eld theory by holography, one is required to calculate the one-loop partition function on top of the classical background solution in the bulk.
1601.00257#23
1601.00257#25
1601.00257
[ "1510.02804" ]
1601.00257#25
Modave Lectures on Applied AdS/CFT with Numerics
On the other hand, to see the ï¬ nite coupling eï¬ ect in the dual quantum ï¬ eld theory by holography, one is required to work with higher derivative gravity theory in the bulk. But in what follows, for simplicity we shall work exclusively with (3.12) in its applicability regime. Among others, we would like to conclude this subsection with the three important im- plications of AdS/CFT. First, a ï¬ nite temperature quantum ï¬ eld theory at ï¬ nite chemical potential is dual to a charged black hole in the bulk. Second, the entanglement entropy of the dual quantum ï¬ eld theory can be calculated by holography as the the area of the bulk minimal surface anchored onto the entangling surface[11, 12, 13]. Third, the extra bulk di- mension represents the renormalization group ï¬ ow direction for the boundary quantum ï¬ eld theory with AdS boundary as UV, although the renormalization scheme is supposed to be diï¬ erent from the conventional one implemented in quantum ï¬ eld theory4. # 3.2 Why AdS/CFT is reliable But why AdS/CFT is reliable? In fact, besides its explicit implementations in string theory such as the duality between Type IIB string theory in AdS5 à S5 and N = 4 SYM theory on the four dimensional boundary, where some results can be computed on both sides and turn out to match each other, there exist many hints from the inside of general relativity indicating that gravity is holographic.
1601.00257#24
1601.00257#26
1601.00257
[ "1510.02804" ]
1601.00257#26
Modave Lectures on Applied AdS/CFT with Numerics
Here we simply list some of them as follows. â ¢ Bekenstein-Hawkingâ s black hole entropy formula SBH = A 4ldâ 1 p [14]. â ¢ Brown-Henneauxâ s asymptotic symmetry analysis for three dimensional gravity[15], 2G successfully reproduces the black hole entropy where the derived central charge 3l by the Cardy formula for conformal ï¬ eld theory[16]. â ¢ Brown-Yorkâ s surface tensor formulation of quasi local energy and conserved charges[17]. Once we are brave enough to declare that this surface tensor be not only for the purpose of the bulk gravity but also for a certain system living on the boundary, we shall end up with the long wave limit of AdS/CFT, namely the gravity/ï¬
1601.00257#25
1601.00257#27
1601.00257
[ "1510.02804" ]
1601.00257#27
Modave Lectures on Applied AdS/CFT with Numerics
uid correspondence, which has been well tested[18]. On the other hand, we can also see how such an extra bulk dimension emerges from quantum ï¬ eld theory perspective. In particular, inspired by Swingleâ s seminal work on the connection between the MERA tensor network state for quantum critical systems and AdS 4This implication is sometimes dubbed as RG = GR. â 8 â space[19], Qi has recently proposed an exact holographic mapping to generate the bulk Hilbert space of the same dimension from the boundary Hilbert space[20], which echoes the afore- mentioned renormalization group ï¬ ow implication of AdS/CFT. Keeping all of these in mind, we shall take AdS/CFT as a ï¬ rst principle and explore its various applications in what follows. # 3.3 How useful AdS/CFT is As alluded to above, AdS/CFT is naturally suited for us to address strongly coupled dy- namics and non-equilibrium processes by mapping the involved hard quantum many body problems to classical few body problems.
1601.00257#26
1601.00257#28
1601.00257
[ "1510.02804" ]
1601.00257#28
Modave Lectures on Applied AdS/CFT with Numerics
There are two approaches towards the construction of holographic models. One is called the top-down approach, where the microscopic content of the dual boundary theory is generically known because the construction originates in string theory. The other is called the bottom-up approach, which can be regarded as kind of eï¬ ective ï¬ eld theory with one extra dimension for the dual boundary theory. By either approach, we can apply AdS/CFT to QCD as well as the QCD underlying quark-gluon plasma, ending up with AdS/QCD[21, 22]. On the other hand, taking into account that there are a bunch of strongly coupled systems in condensed matter physics such as high Tc superconductor, liquid Helium, and non-Fermi liquid, we can also apply AdS/CFT to condensed matter physics, ending up with AdS/CMT[23, 24, 25, 26, 27]. Note that the bulk dynamics boils eventually down to a set of diï¬ erential equations, whose solutions are generically not amenable to an analytic treatment. So one of the central tasks in applied AdS/CFT is to ï¬ nd the numerical solutions to diï¬ erential equations. In the next section, we shall provide a basic introduction to the main numerical methods for solving diï¬ erential equations in applied AdS/CFT.
1601.00257#27
1601.00257#29
1601.00257
[ "1510.02804" ]
1601.00257#29
Modave Lectures on Applied AdS/CFT with Numerics
# 4. Numerics for Solving Diï¬ erential Equations Roughly speaking, there are three numerical schemes to solve diï¬ erential equations by trans- forming them into algebraic equations, namely ï¬ nite diï¬ erent method, ï¬ nite element method, and spectral method. According to our experience with the numerics in applied AdS/CFT, it is favorable to make a code from scratch for each problem you are faced up with. In particular, the variant of spectral method, namely pseudo-spectral method turns out to be most eï¬ cient in solving diï¬ erential equations along the space direction where Newton-Raphson iteration method is extensively employed if the resultant algebraic equations are non-linear. On the other hand, ï¬ nite diï¬
1601.00257#28
1601.00257#30
1601.00257
[ "1510.02804" ]
1601.00257#30
Modave Lectures on Applied AdS/CFT with Numerics
erence method such as Runge-Kutta method is usually used to deal with the dynamical evolution along the time direction. So now we like to elaborate a little bit on Newton-Raphson method, pseudo-spectral method, as well as Runge-Kutta method one by one. â 9 â f(x) Figure 4: Newton-Raphson iteration map is used to ï¬ nd the rightmost root for a non-linear algebraic equation. # 4.1 Newton-Raphson method
1601.00257#29
1601.00257#31
1601.00257
[ "1510.02804" ]
1601.00257#31
Modave Lectures on Applied AdS/CFT with Numerics
To ï¬ nd the desired root for a given non-linear function f (x), we can start with a wisely guessed initial point xk. Then as shown in Figure 4 by Newton-Raphson iteration map, we hit the next point xk+1 as trp =p â f' (we) f(r), (4.1) which is supposed to be closer to the desired root. By a ï¬ nite number of iterations, we eventually end up with a good approximation to the desired root. If we are required to ï¬ nd the root for a group of non-linear functions F (X), then the iteration map is given by Xk+1 = Xk â [( â F â X )â 1F ]|Xk , (4.2) where the formidable Jacobian can be tamed by Taylor expansion trick since the expansion coeï¬ cient of the linear term is simply the Jacobian in Taylor expansion F (X) = F (X0) + â F â X |X0(X â X0) + · · ·.
1601.00257#30
1601.00257#32
1601.00257
[ "1510.02804" ]
1601.00257#32
Modave Lectures on Applied AdS/CFT with Numerics
# 4.2 Pseudo-spectral method As we know, we can expand an analytic function in terms of a set of appropriate spectral functions as f(a) = > CnTy (x) (4.3) â 10 â with N some truncation number, depending on the numerical accuracy you want to achieve. Then the derivative of this function is given by N f(x) = > CnT" (2). (4.4) n=1 Whence the derivatives at the collocation points can be obtained from the values of this function at these points by the following diï¬ erential matrix as f'(@i) = 35 Dis f(x), (4.5) J where the matrix D = Tâ T~! with Tj, = T,(a) and T/, = T/(x;). With this differential matrix, the differential equation in consideration can be massaged into a group of algebraic equations for us to solve the unknown f(x;) by requiring that both the equation hold at the collocation points and the prescribed boundary conditions be satisfied. This is the underlying idea for pseudo-spectral method. Among others, we would like to point out the two very advantages of pseudo-spectral method, compared to ï¬
1601.00257#31
1601.00257#33
1601.00257
[ "1510.02804" ]
1601.00257#33
Modave Lectures on Applied AdS/CFT with Numerics
nite diï¬ erence method and ï¬ nite element method. First, one can ï¬ nd the interpolating function for f (x) by the built-in procedure as follows f (x) = Tn(x)T â 1 ni f (xi). n,i (4.6) Second, the numerical error decays exponentially with the truncation number N rather than the power law decay followed by the other two methods. # 4.3 Runge-Kutta method
1601.00257#32
1601.00257#34
1601.00257
[ "1510.02804" ]
1601.00257#34
Modave Lectures on Applied AdS/CFT with Numerics
As mentioned before, we should employ ï¬ nite diï¬ erence method to march along the time direction. But before that, we are required to massage the involved diï¬ erential equation into the following ordinary diï¬ erential equation Ë y = f (y, t), (4.7) which is actually the key step for one to investigate the temporal evolution in applied AdS/CFT. Once this non-trivial step is achieved, then there are a bunch of ï¬
1601.00257#33
1601.00257#35
1601.00257
[ "1510.02804" ]
1601.00257#35
Modave Lectures on Applied AdS/CFT with Numerics
nite difference method and finite element method. First, one can find the interpolating function for f(x) by the built-in procedure as follows
f(x) = Σ_{n,i} T_n(x) (T⁻¹)_{ni} f(x_i).  (4.6)
Second, the numerical error decays exponentially with the truncation number N rather than the power law decay followed by the other two methods.
# 4.3 Runge-Kutta method
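Before turning to the time direction, here is a compact Python sketch, added for illustration, of the pseudo-spectral machinery of Section 4.2: the differentiation matrix D = T′T⁻¹ of (4.5) and the off-grid interpolation (4.6). It assumes a Chebyshev basis T_n with Gauss collocation points, which is one common choice rather than the unique one.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_diff_matrix(N):
    """Collocation points x_i and differentiation matrix D = T' T^{-1}."""
    i = np.arange(N)
    x = np.cos(np.pi * (2 * i + 1) / (2 * N))      # Chebyshev-Gauss points
    T = C.chebvander(x, N - 1)                     # T[i, n] = T_n(x_i)
    Tp = np.zeros_like(T)
    for n in range(N):
        cn = np.zeros(N); cn[n] = 1.0
        Tp[:, n] = C.chebval(x, C.chebder(cn))     # T_n'(x_i)
    return x, T, Tp @ np.linalg.inv(T)

N = 20
x, T, D = cheb_diff_matrix(N)
f = np.exp(x) * np.sin(2 * x)
df_exact = np.exp(x) * (np.sin(2 * x) + 2 * np.cos(2 * x))
print(np.max(np.abs(D @ f - df_exact)))            # tiny: spectral accuracy

# off-grid interpolation, cf. (4.6): f(x) ~ sum_{n,i} T_n(x) (T^{-1})_{ni} f(x_i)
coeffs = np.linalg.solve(T, f)                     # the C_n of (4.3)
xs = np.linspace(-1, 1, 5)
print(np.max(np.abs(C.chebval(xs, coeffs) - np.exp(xs) * np.sin(2 * xs))))
```

Increasing N and repeating the check makes the exponential decay of the error with the truncation number directly visible.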
1601.00257#34
1601.00257#36
1601.00257
[ "1510.02804" ]
1601.00257#36
Modave Lectures on Applied AdS/CFT with Numerics
because it is user friendly and applicable to all the temporal evolution problems we have been considered so far[28, 29, 30, 31, 32, 33]5. # 5. Holographic Superï¬ uid at Zero Temperature In this section, we would like to take the zero temperature holographic superï¬ uid as an concrete example to demonstrate how to apply AdS/CFT with numerics. In due course, not only shall we introduce some relevant concepts, but also present some new results[34]. The action for the simplest model of holographic superï¬ uid is just given by (3.4). To make our life easier, we shall work with the probe limit, namely the back reaction of matter ï¬ elds onto the metric is neglected, which can be achieved by taking the large Q limit. Thus we can put the matter ï¬ elds on top of the background which is the solution to the vacuum Einstein equation with a negative cosmological constant Î = â d(dâ 1) .
1601.00257#35
1601.00257#37
1601.00257
[ "1510.02804" ]
1601.00257#37
Modave Lectures on Applied AdS/CFT with Numerics
For simplicity, we shall focus only on the zero temperature holographic superï¬ uid, which can be implemented by choosing the AdS soliton as the bulk geometry[35], i.e., ds2 = l2 z2 [â dt2 + dx2 + dz2 f (z) + f (z)dθ2]. (5.1) Here f (z) = 1 â ( z )d with z = z0 the tip where our geometry caps oï¬ and z = 0 the AdS z0 boundary. To guarantee the smooth geometry at the tip, we are required to impose the periodicity 4Ï z0 onto the θ coordinate. The inverse of this periodicity set by z0 is usually 3 interpreted as the conï¬ ning scale for the dual boundary theory. In what follows, we will take the units in which l = 1, 16Ï GQ2 = 1, and z0 = 1. In addition, we shall focus exclusively on the action of matter ï¬ elds because the leading Q0 contribution has been frozen by the above ï¬
1601.00257#36
1601.00257#38
1601.00257
[ "1510.02804" ]
1601.00257#38
Modave Lectures on Applied AdS/CFT with Numerics
xed background geometry. # 5.1 Variation of action, Boundary terms, and Choice of ensemble The variational principle gives rise to the equations of motion if and only if the boundary terms vanish in the variation of action. For our model, the variation of action is given by 5S = / d*leJâ G|V.F® + i(®D°S â BD'S)|5 A, â / dleVâ hing ES Ay + (f attey g(DaD* â m2) &5® pes hngD*8d5®) + C.C)]. (5.2) To make the boundary terms vanish, we can fix A, and ® on the boundary. Fixing A, amounts to saying that we are working with the grand canonical ensemble. In order to work with the canonical ensemble where /â hn,F® is fixed instead, we are required to add the additional boundary term J Ba /â hn F Ay to the action, which is essentially the Legendre transformation. On the other hand, fixing ¢_ gives rise to the standard quantization.
1601.00257#37
1601.00257#39
1601.00257
[ "1510.02804" ]
1601.00257#39
Modave Lectures on Applied AdS/CFT with Numerics
We can 5It is worthwhile to keep in mind that the accumulated numerical error is of order O(â t4) for this classical Runge-Kutta method. â 12 â also have an alternative quantization by ï¬ xing Ï + when â d2 4 + 1[37]. In what follows, we shall restrict our attention onto the grand canonical ensemble and the standard quantization for the case of d = 3 and m2 = â 2, whereby â â = 1 and â + = 2. # 5.2 Asymptotic expansion, Counter terms, and Holographic renormalization What we care about is the on-shell action, which can be shown to have IR divergence gener- ically in the bulk by the asymptotic expansion near the AdS boundary, corresponding to the UV divergence for the dual boundary theory. The procedure to make the on-shell action ï¬ nite by adding some appropriate counter terms is called holographic renormalization[38]. For our case, the on-shell action is given by
1601.00257#38
1601.00257#40
1601.00257
[ "1510.02804" ]