doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable ⌀) | journal_ref (string, len 8–194, nullable ⌀) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1512.02167 | 3 | 1 http://visualqa.csail.mit.edu  2 https://github.com/metalbubble/VQAbaseline
[Figure 1 diagram: the question "are these people family?" is encoded as a one-hot vector and the image as a CNN feature; the softmax outputs answer probabilities such as yes: 0.81, no: 0.15, cafeteria: 0.01.]
Figure 1: Framework of the iBOWIMG. Features from the question sentence and image are concatenated, then fed into a softmax to predict the answer.
In this work, we carefully implement the BOWIMG baseline model. We call it iBOWIMG to avoid confusion with the implementation in [2]. With a proper setup and training, this simple baseline model shows performance comparable to many recent recurrent network-based approaches for visual QA. Further analysis shows that the baseline learns to correlate the informative words in the question sentence and the visual concepts in the image with the answer. Furthermore, such correlations can be used to compute a reasonable spatial attention map with the help of the CAM technique proposed in [20]. The source code and a visual QA demo based on the trained model are publicly available. In the demo, the iBOWIMG baseline gives answers to any question relevant to the given images. Playing with the visual QA model interactively can reveal the strengths and weaknesses of the trained model.
# iBOWIMG for Visual Question Answering | 1512.02167#3 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 4 | # iBOWIMG for Visual Question Answering
In most of the recently proposed models, visual QA is simplified to a classification task: the number of different answers in the training set is the number of final classes the models need to learn to predict. The general pipeline of these models is that the word feature extracted from the question sentence is concatenated with the visual feature extracted from the image, and the two are then fed into a softmax layer to predict the answer class. The visual feature is usually taken from the top of the VGG network or GoogLeNet, while the word features of the question sentence are usually the popular LSTM-based features [12, 2]. | 1512.02167#4 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 5 | In our iBOWIMG model, we simply use a naive bag of words as the text feature and the deep features from GoogLeNet [14] as the visual features. Figure 1 shows the framework of the iBOWIMG model, which can be implemented in Torch with no more than 10 lines of code. The input question is first converted to a one-hot vector, which is transformed into a word feature via a word embedding layer and then concatenated with the image feature from the CNN. The combined feature is sent to the softmax layer to predict the answer class, which is essentially a multi-class logistic regression model.
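As a rough illustration, here is a minimal sketch of this framework in present-day PyTorch (the paper's implementation is in Torch7, so names and the word-feature dimension are our assumptions; the vocabulary and answer counts follow Section 3.2):

```python
import torch
import torch.nn as nn

class IBOWIMG(nn.Module):
    def __init__(self, vocab_size=5746, num_answers=5216,
                 word_dim=256, img_dim=1024):
        super().__init__()
        # Bag of words: the question's one-hot word vectors are embedded and
        # summed; EmbeddingBag does both steps in one call.
        self.bow = nn.EmbeddingBag(vocab_size, word_dim, mode="sum")
        # A single softmax layer over the concatenated feature, i.e.,
        # multi-class logistic regression.
        self.classifier = nn.Linear(word_dim + img_dim, num_answers)

    def forward(self, question_ids, img_feat):
        # question_ids: (batch, seq_len) word indices
        # img_feat: (batch, img_dim) precomputed GoogLeNet feature
        word_feat = self.bow(question_ids)
        combined = torch.cat([word_feat, img_feat], dim=1)
        return self.classifier(combined)  # answer-class logits
```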
# 3 Experiments
Here we train and evaluate the iBOWIMG model on the full release of the COCO VQA dataset [2], the largest VQA dataset so far. In the COCO VQA dataset, there are 3 questions annotated by Amazon Mechanical Turk (AMT) workers for each image in the COCO dataset. For each question, 10 answers are annotated by another batch of AMT workers. To pre-process the annotations for training, we perform majority voting over the 10 ground-truth answers to get the most certain answer
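A minimal sketch of this majority-voting step (the function name and tie-breaking behavior are our assumptions):

```python
from collections import Counter

def most_certain_answer(answers):
    """Pick the most frequent of the 10 AMT answers (ties broken arbitrarily)."""
    return Counter(answers).most_common(1)[0][0]

print(most_certain_answer(["yes"] * 7 + ["no"] * 3))  # -> yes
```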
# Table 1: Performance comparison on test-dev. | 1512.02167#5 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 6 | # Table 1: Performance comparison on test-dev.
(OE = Open-Ended track, MC = Multiple-Choice track)

| Method | OE Overall | OE yes/no | OE number | OE others | MC Overall | MC yes/no | MC number | MC others |
|---|---|---|---|---|---|---|---|---|
| IMG [2] | 28.13 | 64.01 | 00.42 | 03.77 | 30.53 | 69.87 | 00.45 | 03.76 |
| BOW [2] | 48.09 | 75.66 | 36.70 | 27.14 | 53.68 | 75.71 | 37.05 | 38.64 |
| BOWIMG [2] | 52.64 | 75.55 | 33.67 | 37.37 | 58.97 | 75.59 | 34.35 | 50.33 |
| LSTMIMG [2] | 53.74 | 78.94 | 35.24 | 36.42 | 57.17 | 78.95 | 35.80 | 43.41 |
| CompMem [6] | 52.62 | 78.33 | 35.93 | 34.46 | - | - | - | - |
| NMN+LSTM [1] | 54.80 | 77.70 | 37.20 | 39.30 | - | - | - | - |
| WR Sel. [13] | - | - | - | - | 60.96 | - | - | - |
| ACK [16] | 55.72 | 79.23 | 36.13 | 40.08 | - | - | - | - |
| DPPnet [11] | 57.22 | 80.71 | 37.24 | 41.69 | 62.48 | 80.79 | 38.94 | 52.16 |
| iBOWIMG | 55.72 | 76.55 | 35.03 | 42.62 | 61.68 | 76.68 | 37.05 | 54.44 |
| 1512.02167#6 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 8 | To generate the training and validation sets for our model, we first randomly split the images of COCO val2014 into a 70% subset A and a 30% subset B. To avoid potential overfitting, questions sharing the same image are placed into the same split. The question-answer pairs from the images of COCO train2014 plus val2014 subset A are combined and used for training, while val2014 subset B is used as the validation set for parameter tuning. After we find the best model parameters, we combine the whole of train2014 and val2014 to train the final model. We submit the predictions of the final model on the testing set (COCO test2015) to the evaluation server to get the final accuracy on the test-dev and test-standard sets. For the Open-Ended Question track, we take the top-1 predicted answer from the softmax output. For the Multiple-Choice Question track, we first get the softmax probability for each of the given choices and then select the most confident one.
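A hedged sketch of the image-level split (the 70/30 ratio is from the text; everything else, including the seed and the toy data, is illustrative):

```python
import random

def split_by_image(qa_pairs, seed=0, train_frac=0.7):
    """Split question-answer pairs so that all questions of an image land
    in the same subset (a toy stand-in for the val2014 split)."""
    ids = sorted({qa["image_id"] for qa in qa_pairs})
    random.Random(seed).shuffle(ids)
    subset_a = set(ids[: int(train_frac * len(ids))])
    train = [qa for qa in qa_pairs if qa["image_id"] in subset_a]
    val = [qa for qa in qa_pairs if qa["image_id"] not in subset_a]
    return train, val

toy = [{"image_id": i // 3, "question": f"q{i}"} for i in range(30)]  # 3 Qs/image
train_qa, val_qa = split_by_image(toy)
```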
The code is implemented in Torch. Training takes about 10 hours on a single NVIDIA Titan Black GPU.
# 3.1 Benchmark Performance | 1512.02167#8 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 9 | The code is implemented in Torch. Training takes about 10 hours on a single NVIDIA Titan Black GPU.
# 3.1 Benchmark Performance
According to the evaluation standard of the VQA dataset, any proposed VQA model should report accuracy on the test-standard set for fair comparison. We report our baseline on the test-dev set in Table 1 and on the test-standard set in Table 2. The test-dev set is used for debugging and validation experiments and allows unlimited submissions to the evaluation server, while test-standard is used for model comparison with a limited number of submissions.
Since this VQA dataset is rather new, the publicly available models evaluated on the dataset are all from non-peer-reviewed arXiv papers. We include the performance of the models available at the time of writing (Dec. 5, 2015) [2, 6, 1, 13, 16, 11]. Note that some models are evaluated on only one of test-dev or test-standard, or for only one of the Open-Ended or Multiple-Choice tracks. | 1512.02167#9 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 10 | The full set of the VQA dataset was released on Oct. 6, 2015; previously the v0.1 and v0.9 versions had been released. We notice that some models are evaluated using non-standard setups, rendering performance comparisons difficult. [17] (arXiv dated Nov. 17, 2015) used the v0.9 version of VQA with their own training and testing split; [18] (arXiv dated Nov. 7, 2015) used their own training and testing split of val2014; [3] (arXiv dated Nov. 18, 2015) used the v0.9 version of the VQA dataset. These are therefore not included in the comparison.
Except for the IMG, BOW, and BOWIMG baselines provided in [2], all the compared methods use either deep or recursive neural networks. However, our iBOWIMG baseline shows performance comparable to these much more complex models, with the exception of DPPnet [11], which is about 1.5% better.
# Table 2: Performance comparison on test-standard. | 1512.02167#10 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 11 | # Table 2: Performance comparison on test-standard.
(OE = Open-Ended track, MC = Multiple-Choice track)

| Method | OE Overall | OE yes/no | OE number | OE others | MC Overall | MC yes/no | MC number | MC others |
|---|---|---|---|---|---|---|---|---|
| LSTMIMG [2] | 54.06 | - | - | - | - | - | - | - |
| NMN+LSTM [1] | 55.10 | - | - | - | - | - | - | - |
| ACK [16] | 55.98 | 79.05 | 36.10 | 40.61 | - | - | - | - |
| DPPnet [11] | 57.36 | 80.28 | 36.92 | 42.24 | 62.69 | 80.35 | 38.79 | 52.79 |
| iBOWIMG | 55.89 | 76.76 | 34.98 | 42.62 | 61.97 | 76.86 | 37.30 | 54.60 |
# 3.2 Training Details
Learning rate and weight clip. We find that setting different learning rates and weight clipping thresholds for the word embedding layer and the softmax layer leads to better performance. The learning rate for the word embedding layer should be much higher than that of the softmax layer in order to learn a good word embedding. From the performance of BOW in Table 1, we can see that a good word model is crucial to accuracy: the BOW model alone gets close to 48%, even without looking at the image content (see the code sketch below). | 1512.02167#11 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
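Referring back to the training-details paragraph above, here is a hedged PyTorch sketch of per-layer learning rates with weight clipping. The numeric values are placeholders (the actual ones are in the authors' source code), the elementwise clamp is a simplified stand-in for weight clipping, and `IBOWIMG` is the module sketched earlier:

```python
import torch
from torch import nn

model = IBOWIMG()  # the module sketched earlier
optimizer = torch.optim.SGD([
    {"params": model.bow.parameters(), "lr": 0.8},          # word embedding: high lr
    {"params": model.classifier.parameters(), "lr": 0.01},  # softmax layer: low lr
])

def clip_weights(module: nn.Module, bound: float) -> None:
    # Keep each layer's weights inside [-bound, bound] after every update
    # (a simple elementwise stand-in for the paper's weight clipping).
    with torch.no_grad():
        for p in module.parameters():
            p.clamp_(-bound, bound)

# inside the training loop, after optimizer.step():
clip_weights(model.bow, bound=1.5)          # placeholder bound
clip_weights(model.classifier, bound=20.0)  # placeholder bound
```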
1512.02167 | 12 | Model parameters to tune. Though our model could be considered the simplest baseline so far for visual QA, there are several model parameters to tune: 1) the number of training epochs; 2) the learning rate and weight clip; 3) the threshold for removing less frequent question words and answer classes. We iteratively search for the best value of each model parameter separately on val2014 subset B. In our best model, there are 5,746 words in the question-sentence dictionary and 5,216 answer classes. The specific model parameters can be found in the source code.
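A hedged sketch of the frequency-threshold filtering for question words (the same idea applies to answer classes; the threshold value and function name are placeholders):

```python
from collections import Counter

def keep_frequent(items, min_count=3):
    """Map items seen at least `min_count` times to indices; the rest are
    dropped (or could be mapped to an <unk> token)."""
    counts = Counter(items)
    kept = sorted(w for w, c in counts.items() if c >= min_count)
    return {w: i for i, w in enumerate(kept)}

question_words = [w for q in ["what is the color of the sofa",
                              "which brand is the laptop"] for w in q.split()]
word_vocab = keep_frequent(question_words, min_count=2)  # {'is': 0, 'the': 1}
```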
# 3.3 Understanding the Visual QA model
From the comparisons above, we can see that our baseline model performs as well as the recurrent neural network models on the VQA dataset. Furthermore, due to its simplicity, the behavior of the model can be easily interpreted, revealing what it has learned for visual QA.
Essentially, the BOWIMG baseline model learns to memorize the correlation between the answer class and the informative words in the question sentence, along with the visual feature. We split the learned weights of the softmax into two parts, one part for the word feature and the other part for the visual feature. Therefore,
r = M_w x_w + M_v x_v. (1) | 1512.02167#12 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 13 | r = M_w x_w + M_v x_v. (1)
Here the softmax matrix M is decomposed into the weights M_w for the word feature x_w and the weights M_v for the visual feature x_v, where M = [M_w, M_v]. r is the response of the answer classes before softmax normalization. Denote the response r_w = M_w x_w as the contribution from the question words and r_v = M_v x_v as the contribution from the image content. Thus, for each predicted answer, we know exactly the proportions of the contributions from the words and from the image content, respectively. We can also rank r_w and r_v to see what the predicted answer would be if the model relied on only one side of the information (see the sketch below). | 1512.02167#13 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
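Referring back to Eq. (1) above, here is a hedged NumPy sketch of the contribution decomposition, with random stand-ins for the learned weights and features (shapes are illustrative):

```python
import numpy as np

word_dim, img_dim, num_answers = 256, 1024, 5216
M = np.random.randn(num_answers, word_dim + img_dim)  # learned softmax weights (stand-in)
x_w = np.random.randn(word_dim)                       # bag-of-words question feature
x_v = np.random.randn(img_dim)                        # CNN image feature

M_w, M_v = M[:, :word_dim], M[:, word_dim:]           # M = [M_w, M_v]
r_w, r_v = M_w @ x_w, M_v @ x_v                       # word / image contributions
r = r_w + r_v                                         # answer responses, Eq. (1)

top_overall   = np.argsort(-r)[:3]    # top-3 predicted answers
top_word_only = np.argsort(-r_w)[:3]  # "based on word only"
top_img_only  = np.argsort(-r_v)[:3]  # "based on image only"
```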
1512.02167 | 14 | Figure 2 shows some examples of the predictions, revealing that the question words usually have a dominant influence on predicting the answer. For example, the correctly predicted answers for the two questions given for the first image, "what is the color of the sofa" and "which brand is the laptop", come mostly from the question words, without the need for the image. This demonstrates the bias in the frequency of objects and actions appearing in the images of the COCO dataset. For the second image, we ask "what are they doing": the words-only prediction gives "playing wii (10.62), eating (9.97), playing frisbee (9.24)", while the full prediction gives the correct answer "playing baseball (10.67 = 2.01 [image] + 8.66 [word])".
To further understand the answers predicted by the model given the visual feature and the question sentence, we first decompose the word contribution of the answer into the single words of the question sentence, and then visualize the informative image regions relevant to the answer through the technique proposed in [19].
| 1512.02167#14 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 15 | Question: what is the color of the sofa
Predictions: brown (score: 12.89 = 1.01 [image] + 11.88 [word]), red (score: 11.92 = 1.13 [image] + 10.79 [word]), yellow (score: 11.91 = 1.08 [image] + 10.84 [word])
Based on image only: books (3.15), yes (3.14), no (2.95)
Based on word only: brown (11.88), gray (11.18), tan (11.16)
Question: which brand is the laptop
Predictions: apple (score: 10.87 = 1.10 [image] + 9.77 [word]), dell (score: 9.83 = 0.71 [image] + 9.12 [word]), toshiba (score: 9.76 = 1.18 [image] + 8.58 [word])
Based on image only: books (3.15), yes (3.14), no (2.95)
Based on word only: apple (9.77), hp (9.18), dell (9.12)
Question: what are they doing
Predictions: playing baseball (score: 10.67 = 2.01 [image] + 8.66 [word]), baseball (score: 9.65 = 4.84 [image] + 4.82 [word]) | 1512.02167#15 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 16 | Predictions: playing baseball (score: 10.67 = 2.01 [image] + 8.66 [word]), baseball (score: 9.65 = 4.84 [image] + 4.82 [word]), grazing (score: 9.34 = 0.53 [image] + 8.81 [word])
Based on word only: playing wii (10.62), eating (9.97), playing frisbee (9.24)
Based on image only: umpire (4.85), baseball (4.84), batter (4.46)
Question: how many people inside
Predictions: 3 (score: 13.39 = 2.75 [image] + 10.65 [word]), 2 (score: 12.76 = 2.49 [image] + 10.27 [word]), 5 (score: 12.72 = 1.83 [image] + 10.89 [word])
Based on image only: umpire (4.85), baseball (4.84), batter (4.46)
Based on word only: 8 (11.24), 7 (10.95), 5 (10.89)
Question: what gaming system are they playing
Predictions: wii (score: 19.35 = 0.64 [image] + 18.71 [word]), soccer (score: 13.23 = 0.34 [image] + 12.89 [word]), mario kart (score: 13.17 = 0.11 [image] + 13.06 [word]) | 1512.02167#16 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 17 | Predictions: wii (score: 19.35 = 0.64 [image] + 18.71 [word]), soccer (score: 13.23 = 0.34 [image] + 12.89 [word]), mario kart (score: 13.17 = 0.11 [image] + 13.06 [word])
Based on image only: library (4.40), yes (3.98), i don't know (3.85)
Based on word only: wii (18.71), mario kart (13.06), soccer (12.89)
Question: are they having fun
Predictions: yes (score: 10.65 = 3.98 [image] + 6.68 [word]), no (score: 8.06 = 3.33 [image] + 4.73 [word]), library (score: 6.20 = 4.40 [image] + 1.80 [word])
Based on image only: library (4.40), yes (3.98), i don't know (3.85)
Based on word only: yes (6.68), no (4.73), fly kite (3.43) | 1512.02167#17 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 18 | Figure 2: Examples of visual question answering from the iBOWIMG baseline. For each image there are two questions and the top 3 predicted answers from the model. The prediction score of each answer is decomposed into the contributions of the image and the words, respectively. The predicted answers that rely purely on the question words or the image are also shown.
Question: What are they doing? Prediction: texting (score: 12.02 = 3.78 [image] + 8.24 [word]); word importance: doing (7.01), are (1.05), they (0.49), what (-0.3)
Question: What is he eating? Prediction: hot dog (score: 13.01 = 5.02 [image] + 7.99 [word]); word importance: eating (4.12), what (2.81), is (0.74), he (0.30)
Question: Is there a cat? Prediction: yes (score: 11.48 = 4.35 [image] + 7.13 [word]); word importance: is (2.65), there (2.46), a (1.70), cat (0.30)
Question: Where is the cat? Prediction: shelf (score: 10.81 = 3.23 [image] + 7.58 [word]); word importance: where (3.89), cat (1.88), the (1.79), is (0.01) | 1512.02167#18 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 19 | Figure 3: Examples of the word importance of question sentences and the informative image regions relevant to the predicted answers.
Since there are just two linear transformations (one the word embedding and the other the softmax matrix multiplication) from the one-hot vector to the answer response, we can easily determine the importance of each single word in the question to the predicted answer. In Figure 3, we plot the ranked word importance for each word in the question sentence. In the first image, the question word "doing" is informative for the answer "texting", while in the second image the question word "eating" is informative for the answer "hot dog".
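A hedged sketch of this per-word importance computation, with random stand-ins for the learned embedding and softmax weights (indices are illustrative):

```python
import numpy as np

vocab_size, word_dim, num_answers = 5746, 256, 5216
E = np.random.randn(vocab_size, word_dim)     # word embedding matrix (stand-in)
M_w = np.random.randn(num_answers, word_dim)  # softmax weights for the word feature

def word_importance(token_ids, answer_id):
    # The word feature is a sum of embeddings, so the answer score
    # decomposes into one additive term per question word.
    return {t: float(M_w[answer_id] @ E[t]) for t in token_ids}

scores = word_importance(token_ids=[12, 7, 93, 4], answer_id=42)
ranked = sorted(scores.items(), key=lambda kv: -kv[1])  # most informative first
```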
To highlight the informative image regions relevant to the predicted answer, we apply a technique called Class Activation Mapping (CAM), proposed in [19]. The CAM technique leverages the linear relation between the softmax prediction and the final convolutional feature map, which allows us to identify the most discriminative image regions relevant to the predicted result. In Figure 3 we plot the heatmaps generated by CAM for the predicted answer, which highlight the
| 1512.02167#19 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 20 | Predictions: flying kites (score: 12.86 = 1.64 [image] + 11.22 [word]), playing baseball (score: 12.38 = 3.18 [image] + 9.20 [word]), playing frisbee (score: 11.96 = 1.72 [image] + 10.24 [word])
Based on image only: baseball (4.74), batting (4.44), glove (4.12)
Based on word only: playing wii (11.49), flying kites (11.22), playing frisbee (10.24)
Question: where is the place
Predictions: field (score: 10.63 = 3.05 [image] + 7.58 [word]), park (score: 9.69 = 2.96 [image] + 6.73 [word]), in air (score: 9.67 = 2.27 [image] + 7.40 [word])
Based on image only: baseball (4.74), batting (4.44), glove (4.12)
Based on word only: above stove (8.23), behind clouds (8.08), on floor (8.03)
Figure 4: Snapshot of the visual question answering demo. People can type questions into the demo, and it returns answer predictions. Here we show the answer predictions for two questions. | 1512.02167#20 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 21 | Figure 4: Snapshot of the visual question answering demo. People can type questions into the demo, and it returns answer predictions. Here we show the answer predictions for two questions.
informative image regions, such as the cellphone in the first image for the answer "texting" and the hot dog in the second image for the answer "hot dog". The example in the lower part of Figure 3 shows the heatmaps generated by two different questions and answers. Visual features from the CNN already have implicit attention and selectivity over image regions, so the resulting class activation maps are similar to the maps generated by the attention mechanisms of the VQA models in [13, 17, 18].
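A hedged NumPy sketch of the CAM computation (arrays are random stand-ins; the channel count matches GoogLeNet's final convolutional map, while the spatial size is illustrative):

```python
import numpy as np

feat_map = np.random.rand(1024, 7, 7)  # final conv feature map (C, H, W)
w_class = np.random.rand(1024)         # softmax weights of the predicted answer
                                       # over the pooled visual feature

cam = np.tensordot(w_class, feat_map, axes=([0], [0]))  # (H, W) relevance map
cam = (cam - cam.min()) / (np.ptp(cam) + 1e-8)          # normalize to [0, 1]
# Upsample `cam` to image size and overlay it to highlight informative regions.
```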
# 4 Interactive Visual QA Demo
Question answering is essentially an interactive activity, so it is desirable for the trained models to interact with people in real time. Aided by the simplicity of the baseline model, we built a web demo in which people can type a question about a given image and our system, powered by iBOWIMG, replies with the most probable answers. Here the deep features of the images are extracted beforehand. Figure 4 shows a snapshot of the demo. People can play with the demo to see the strengths and weaknesses of the VQA model.
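A hedged sketch of the demo's answering loop, building on the model and vocabulary sketches above (all names are illustrative); since image features are precomputed, answering a typed question is a single forward pass:

```python
import torch

@torch.no_grad()
def answer(model, word_vocab, answer_names, img_feat, question, k=3):
    """Return the top-k (answer, probability) pairs for one typed question."""
    ids = [word_vocab[w] for w in question.lower().split() if w in word_vocab]
    logits = model(torch.tensor([ids]), img_feat.unsqueeze(0))
    probs = torch.softmax(logits, dim=1)[0]
    top = torch.topk(probs, k)
    return [(answer_names[i], float(p)) for p, i in zip(top.values, top.indices)]
```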
# 5 Concluding Remarks | 1512.02167#21 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 22 | # 5 Concluding Remarks
For visual question answering on the COCO dataset, our implementation of a simple baseline achieves performance comparable to several recently proposed recurrent neural network-based approaches. To reach the correct prediction, the baseline captures the correlation between the informative words in the question and the answer, and that between the image contents and the answer. How to move beyond this, from memorizing correlations to actual reasoning and understanding of the question and image, is a goal for future research.
# References
[1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Deep compositional question answering with neural module networks. arXiv preprint arXiv:1511.02799, 2015.
[2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. Vqa: Visual question answering. arXiv preprint arXiv:1505.00468, 2015.
[3] K. Chen, J. Wang, L.-C. Chen, H. Gao, W. Xu, and R. Nevatia. Abc-cnn: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960, 2015. | 1512.02167#22 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 23 | [4] J. Devlin, S. Gupta, R. Girshick, M. Mitchell, and C. L. Zitnick. Exploring nearest neighbor approaches for image captioning. arXiv preprint arXiv:1505.04467, 2015.
[5] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? dataset and methods for multilingual image question answering. arXiv preprint arXiv:1505.05612, 2015.
[6] A. Jiang, F. Wang, F. Porikli, and Y. Li. Compositional memory for visual question answering. arXiv preprint arXiv:1511.05676, 2015.
[7] R. Kiros, R. Salakhutdinov, and R. Zemel. Multimodal neural language models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 595–603, 2014.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012. | 1512.02167#23 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 24 | [9] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014, pages 740–755. Springer, 2014.
[10] J. Mao, W. Xu, Y. Yang, J. Wang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632, 2014.
[11] H. Noh, P. H. Seo, and B. Han. Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756, 2015.
[12] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. In NIPS, volume 1, page 3, 2015.
[13] K. J. Shih, S. Singh, and D. Hoiem. Where to look: Focus regions for visual question answering. arXiv preprint arXiv:1511.07394, 2015. | 1512.02167#24 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.02167 | 25 | [14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[15] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
[16] Q. Wu, P. Wang, C. Shen, A. v. d. Hengel, and A. Dick. Ask me anything: Free-form visual question answering based on knowledge from external sources. arXiv preprint arXiv:1511.06973, 2015.
[17] H. Xu and K. Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. arXiv preprint arXiv:1511.05234, 2015.
[18] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015. | 1512.02167#25 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. . | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 | [
{
"id": "1511.05234"
},
{
"id": "1505.05612"
},
{
"id": "1511.07394"
},
{
"id": "1511.05960"
},
{
"id": "1511.05676"
},
{
"id": "1511.02799"
},
{
"id": "1511.02274"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.00468"
},
{
"id": "1512.04150"
},
{
"id": "1505.04467"
}
] |
1512.00965 | 1 | We proposed Neural Enquirer as a neural network architecture to execute a natural language (NL) query on a knowledge-base (KB) for answers. Basically, Neural Enquirer finds the distributed representation of a query and then executes it on knowledge-base tables to obtain the answer as one of the values in the tables. Unlike similar efforts in end-to-end training of semantic parsers [11, 9], Neural Enquirer is fully "neuralized": it not only gives a distributional representation of the query and the knowledge-base, but also realizes the execution of compositional queries as a series of differentiable operations, with intermediate results (consisting of annotations of the tables at different levels) saved on multiple layers of memory. Neural Enquirer can be trained with gradient descent, with which not only the parameters of the controlling components and the semantic parsing component, but also the embeddings of the tables and query words, can be learned from scratch. The training can be done in an end-to-end fashion, but it can take stronger guidance, e.g., the step-by-step supervision for complicated | 1512.00965#1 | Neural Enquirer: Learning to Query Tables with Natural Language | We proposed Neural Enquirer as a neural network architecture to execute a
natural language (NL) query on a knowledge-base (KB) for answers. Basically,
Neural Enquirer finds the distributed representation of a query and then
executes it on knowledge-base tables to obtain the answer as one of the values
in the tables. Unlike similar efforts in end-to-end training of semantic
parsers, Neural Enquirer is fully "neuralized": it not only gives
distributional representation of the query and the knowledge-base, but also
realizes the execution of compositional queries as a series of differentiable
operations, with intermediate results (consisting of annotations of the tables
at different levels) saved on multiple layers of memory. Neural Enquirer can be
trained with gradient descent, with which not only the parameters of the
controlling components and semantic parsing component, but also the embeddings
of the tables and query words can be learned from scratch. The training can be
done in an end-to-end fashion, but it can take stronger guidance, e.g., the
step-by-step supervision for complicated queries, and benefit from it. Neural
Enquirer is one step towards building neural network systems which seek to
understand language by executing it on real-world. Our experiments show that
Neural Enquirer can learn to execute fairly complicated NL queries on tables
with rich structures. | http://arxiv.org/pdf/1512.00965 | Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao | cs.AI, cs.CL, cs.LG, cs.NE | null | null | cs.AI | 20151203 | 20160121 | [] |
1512.00965 | 2 | The training can be done in an end-to-end fashion, but it can take stronger guidance, e.g., the step-by-step supervision for complicated queries, and benefit from it. Neural Enquirer is one step towards building neural network systems which seek to understand language by executing it on the real world. Our experiments show that Neural Enquirer can learn to execute fairly complicated NL queries on tables with rich structures. | 1512.00965#2 | Neural Enquirer: Learning to Query Tables with Natural Language | We proposed Neural Enquirer as a neural network architecture to execute a
natural language (NL) query on a knowledge-base (KB) for answers. Basically,
Neural Enquirer finds the distributed representation of a query and then
executes it on knowledge-base tables to obtain the answer as one of the values
in the tables. Unlike similar efforts in end-to-end training of semantic
parsers, Neural Enquirer is fully "neuralized": it not only gives
distributional representation of the query and the knowledge-base, but also
realizes the execution of compositional queries as a series of differentiable
operations, with intermediate results (consisting of annotations of the tables
at different levels) saved on multiple layers of memory. Neural Enquirer can be
trained with gradient descent, with which not only the parameters of the
controlling components and semantic parsing component, but also the embeddings
of the tables and query words can be learned from scratch. The training can be
done in an end-to-end fashion, but it can take stronger guidance, e.g., the
step-by-step supervision for complicated queries, and benefit from it. Neural
Enquirer is one step towards building neural network systems which seek to
understand language by executing it on real-world. Our experiments show that
Neural Enquirer can learn to execute fairly complicated NL queries on tables
with rich structures. | http://arxiv.org/pdf/1512.00965 | Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao | cs.AI, cs.CL, cs.LG, cs.NE | null | null | cs.AI | 20151203 | 20160121 | [] |
1512.00965 | 3 | # 1 Introduction
In models for natural language dialogue and question answering, there is a ubiquitous need for querying a knowledge-base [13, 11]. The traditional pipeline is to put the query through a semantic parser to obtain some "executable" representation, typically a logical form, and then apply this representation to a knowledge-base for the answer. Both the semantic parsing and the query execution part can get quite messy for complicated queries like Q: "Which city hosted the longest game before the game in Beijing?" in Figure 1, and need carefully devised systems with hand-crafted features or rules to derive the correct logical form F (written in SQL-like style). Partially to overcome this difficulty, there has been effort [11] to "backpropagate" the result of query execution to revise the semantic representation of the query, which actually falls into the thread of work on learning from grounding [5]. One drawback of these
*Work done when the first author worked as an intern at Noah's Ark Lab, Huawei Technologies.
| 1512.00965#3 | Neural Enquirer: Learning to Query Tables with Natural Language | We proposed Neural Enquirer as a neural network architecture to execute a
natural language (NL) query on a knowledge-base (KB) for answers. Basically,
Neural Enquirer finds the distributed representation of a query and then
executes it on knowledge-base tables to obtain the answer as one of the values
in the tables. Unlike similar efforts in end-to-end training of semantic
parsers, Neural Enquirer is fully "neuralized": it not only gives
distributional representation of the query and the knowledge-base, but also
realizes the execution of compositional queries as a series of differentiable
operations, with intermediate results (consisting of annotations of the tables
at different levels) saved on multiple layers of memory. Neural Enquirer can be
trained with gradient descent, with which not only the parameters of the
controlling components and semantic parsing component, but also the embeddings
of the tables and query words can be learned from scratch. The training can be
done in an end-to-end fashion, but it can take stronger guidance, e.g., the
step-by-step supervision for complicated queries, and benefit from it. Neural
Enquirer is one step towards building neural network systems which seek to
understand language by executing it on real-world. Our experiments show that
Neural Enquirer can learn to execute fairly complicated NL queries on tables
with rich structures. | http://arxiv.org/pdf/1512.00965 | Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao | cs.AI, cs.CL, cs.LG, cs.NE | null | null | cs.AI | 20151203 | 20160121 | [] |
1512.00965 | 4 | *Work done when the first author worked as an intern at Noah's Ark Lab, Huawei Technologies.
[Figure 1 diagram: the query Q "Which city hosted the longest Olympic game before the game in Beijing?" and the table are embedded, then processed by a cascade of executors with memory layers: Find row r1 where host_city = Beijing (Memory Layer-1); Select year of r1 as a (Memory Layer-2); Find row set R where year < a (Memory Layer-3); Find r2 in R with max(# duration); Select host_city of r2, yielding the answer "Athens" as a probability distribution over table entries. Example table rows (year, host_city, # duration, ...): 2000 Sydney 20 2,000; 2004 Athens 35 1,500; 2008 Beijing 30 2,500; 2012 London 40 2,300. Logical form F: argmax(host_city, # duration) where year < (select year, where host_city=Beijing).]
Figure 1: An overview of Neural Enquirer with five executors
semantic parsing models is that they remain rather symbolic, with rule-based features, leaving only a handful of tunable parameters to cater to the supervision signal from the execution result. | 1512.00965#4 | Neural Enquirer: Learning to Query Tables with Natural Language | We proposed Neural Enquirer as a neural network architecture to execute a
natural language (NL) query on a knowledge-base (KB) for answers. Basically,
Neural Enquirer finds the distributed representation of a query and then
executes it on knowledge-base tables to obtain the answer as one of the values
in the tables. Unlike similar efforts in end-to-end training of semantic
parsers, Neural Enquirer is fully "neuralized": it not only gives
distributional representation of the query and the knowledge-base, but also
realizes the execution of compositional queries as a series of differentiable
operations, with intermediate results (consisting of annotations of the tables
at different levels) saved on multiple layers of memory. Neural Enquirer can be
trained with gradient descent, with which not only the parameters of the
controlling components and semantic parsing component, but also the embeddings
of the tables and query words can be learned from scratch. The training can be
done in an end-to-end fashion, but it can take stronger guidance, e.g., the
step-by-step supervision for complicated queries, and benefit from it. Neural
Enquirer is one step towards building neural network systems which seek to
understand language by executing it on real-world. Our experiments show that
Neural Enquirer can learn to execute fairly complicated NL queries on tables
with rich structures. | http://arxiv.org/pdf/1512.00965 | Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao | cs.AI, cs.CL, cs.LG, cs.NE | null | null | cs.AI | 20151203 | 20160121 | [] |
1512.00965 | 5 | semantic parsing models is that they remain rather symbolic, with rule-based features, leaving only a handful of tunable parameters to cater to the supervision signal from the execution result.
On the other hand, neural network-based models have previously been successful mostly on tasks with direct and strong supervision in natural language processing or related domains, with examples including machine translation and syntactic parsing. The recent work on learning to execute simple Python code with an LSTM [15] pioneers the direction of learning to parse structured objects by executing them in a purely neural way, while the later work on the Neural Turing Machine (NTM) [6] introduces more modeling flexibility by equipping the LSTM with external memory and various means of interacting with it. | 1512.00965#5 | Neural Enquirer: Learning to Query Tables with Natural Language | We proposed Neural Enquirer as a neural network architecture to execute a
natural language (NL) query on a knowledge-base (KB) for answers. Basically,
Neural Enquirer finds the distributed representation of a query and then
executes it on knowledge-base tables to obtain the answer as one of the values
in the tables. Unlike similar efforts in end-to-end training of semantic
parsers, Neural Enquirer is fully "neuralized": it not only gives
distributional representation of the query and the knowledge-base, but also
realizes the execution of compositional queries as a series of differentiable
operations, with intermediate results (consisting of annotations of the tables
at different levels) saved on multiple layers of memory. Neural Enquirer can be
trained with gradient descent, with which not only the parameters of the
controlling components and semantic parsing component, but also the embeddings
of the tables and query words can be learned from scratch. The training can be
done in an end-to-end fashion, but it can take stronger guidance, e.g., the
step-by-step supervision for complicated queries, and benefit from it. Neural
Enquirer is one step towards building neural network systems which seek to
understand language by executing it on real-world. Our experiments show that
Neural Enquirer can learn to execute fairly complicated NL queries on tables
with rich structures. | http://arxiv.org/pdf/1512.00965 | Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao | cs.AI, cs.CL, cs.LG, cs.NE | null | null | cs.AI | 20151203 | 20160121 | [] |
1512.00965 | 6 | Our work, inspired by above-mentioned threads of research, aims to design a neural net- work system that can learn to understand the query and execute it on a knowledge-base table from examples of queries and answers. Our proposed Neural Enquirer encodes queries and KBs into distributed representations, and executes compositional queries against the KB It can be trained using Query-Answer pairs, through a series of diï¬erentiable operations. where the distributed representations of queries and the KB are optimized together with the query execution logic in an end-to-end fashion. We then demonstrates using a synthetic question-answering task that our proposed Neural Enquirer is capable of learning to exe- cute compositional natural language queries with complex structures.
# 2 Overview of Neural Enquirer
Given an NL query Q and a KB table T, Neural Enquirer executes the query against the table and outputs a ranked list of query answers. The execution is done by first using Encoders to encode the query and table into distributed representations, which are then sent to a cascaded pipeline of Executors to derive the answer. Figure 1 gives an illustrative example (with five executors) of the various types of components involved:
Query Encoder (Section 3.1), which encodes the query into a distributed representation that carries the semantic information of the original query. The encoded query embedding will be sent to various executors to compute its execution result.
Table Encoder (Section 3.2), which encodes entries in the table into distributed vectors. Table Encoder outputs an embedding vector for each table entry, which retains the two-dimensional structure of the table.
Executor (Section 3.3), which executes the query against the table and outputs annotations that encode intermediate execution results; these are stored in the memory of each layer to be accessed by the subsequent executor. Our basic assumption is that complex, compositional queries can be answered through multiple steps of computation, where each executor models a specific type of operation conditioned on the query. Figure 1 illustrates the operation each executor is supposed to perform in answering the example query. Different from classical semantic parsing approaches, which require a predefined set of all possible logical operations, Neural Enquirer is capable of learning the logic of executors via end-to-end training using Query-Answer pairs. By stacking several executors, Neural Enquirer is able to answer complex queries involving multiple steps of computation; a minimal sketch of this cascaded pipeline is given below.
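To make the cascaded control flow concrete, here is a minimal PyTorch-flavored sketch of the forward pass. The module interfaces (the encoder callables, the `answer` method on the last executor) are our own illustrative stand-ins, not the authors' released code:

```python
import torch

def neural_enquirer_forward(query_tokens, table, query_encoder, table_encoder, executors):
    """Hypothetical end-to-end forward pass over the stacked executors."""
    q = query_encoder(query_tokens)          # query embedding q
    entries = table_encoder(table)           # (M, N, d_E) composite entry embeddings
    memory = None                            # Memory Layer-0: no annotations yet
    for executor in executors[:-1]:
        # each executor reads the table, computes row/table annotations,
        # and stores them as the next memory layer
        memory = executor(q, entries, memory)
    # the last executor emits a probability for every table entry
    return executors[-1].answer(q, entries, memory)
```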
# 3 Model
In this section we give a more detailed exposition of the different types of components in the Neural Enquirer model.
# 3.1 Query Encoder
Given an NL query Q composed of a sequence of words {w_1, w_2, . . . , w_T}, Query Encoder parses Q into a d_Q-dimensional vectorial representation q: Q --encode--> q ∈ R^{d_Q}. In our implementation of Neural Enquirer, we employ a bidirectional RNN for this mission¹. More specifically, the RNN summarizes the sequence of word embeddings of Q, {x_1, x_2, . . . , x_T}, into a vector q as the representation of Q, where x_t = L[w_t], x_t ∈ R^{d_W}, and L is the embedding matrix. See Appendix A for details.
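A minimal sketch of such a query encoder, assuming a bidirectional GRU (a GRU is mentioned in the experimental setup; the exact architecture lives in Appendix A, and the class name and sizes here are illustrative):

```python
import torch
import torch.nn as nn

class QueryEncoder(nn.Module):
    """Sketch: embed query words, run a bidirectional GRU, and concatenate
    the two final hidden states into the query embedding q."""
    def __init__(self, vocab_size, d_w=20, d_h=150):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_w)   # L: word embedding matrix
        self.rnn = nn.GRU(d_w, d_h, bidirectional=True, batch_first=True)

    def forward(self, word_ids):                     # word_ids: (B, T)
        x = self.embed(word_ids)                     # (B, T, d_w)
        _, h = self.rnn(x)                           # h: (2, B, d_h)
        return torch.cat([h[0], h[1]], dim=-1)       # q: (B, 2 * d_h)
```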
It is worth noting that our Query Encoder can find the representation of a rather general class of symbol sequences, agnostic to the actual form of queries (e.g., natural language, SQL-like, etc.). Neural Enquirer is capable of learning the execution logic expressed in the input query through end-to-end training, making it a generic model for query execution.
# 3.2 Table Encoder
Table Encoder converts a knowledge-base table T into its distributional representation as input to Neural Enquirer. Suppose the table has M rows and N columns, where each column comes with a field name (e.g., host city) and the value of each table entry is a word (e.g., Beijing) in our vocabulary.
[Inline figure: a composite embedding is formed by fusing a field embedding with a value embedding.]
¹Other choices of sentence encoder, such as an LSTM or even a convolutional neural network, are possible too.
[Figure 2 schematic: within Executor-ℓ, a Reader reads each row of the encoded table and an Annotator computes row annotations, which are pooled into a table annotation; both are saved in Memory Layer-ℓ, with access to Memory Layer-(ℓ−1) and the query embedding.]
Figure 2: Overview of an Executor-ℓ
Table Encoder first finds the embedding for field names and values of the table, and then it computes the (field, value) composite embedding for each of the M × N entries in the table. More specifically, for the entry in the m-th row and n-th column with a value of w_mn, Table Encoder computes a d_E-dimensional embedding vector e_mn by fusing the embedding of the entry value with the embedding of its corresponding field name as follows:
e_mn = DNN_0([L[w_mn]; f_n]) = tanh(W · [L[w_mn]; f_n] + b)
where f_n is the embedding of the field name (of the n-th column), W and b denote the weight matrix and bias, and [·; ·] the concatenation of vectors. The output of Table Encoder is a tensor of shape M × N × d_E, consisting of M × N embeddings of length d_E for all entries.
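A minimal sketch of this composite embedding in PyTorch; the dimensionalities follow the defaults reported in Section 5.2, and the class name is ours:

```python
import torch
import torch.nn as nn

class TableEncoder(nn.Module):
    """Sketch of e_mn = tanh(W [L[w_mn]; f_n] + b) for every table entry."""
    def __init__(self, vocab_size, n_fields, d_w=20, d_e=20):
        super().__init__()
        self.value_embed = nn.Embedding(vocab_size, d_w)   # L: value embeddings
        self.field_embed = nn.Embedding(n_fields, d_w)     # f_n: field embeddings
        self.proj = nn.Linear(2 * d_w, d_e)                # W, b

    def forward(self, value_ids):                          # value_ids: (M, N)
        M, N = value_ids.shape
        v = self.value_embed(value_ids)                    # (M, N, d_w)
        f = self.field_embed(torch.arange(N))              # (N, d_w)
        f = f.unsqueeze(0).expand(M, N, -1)                # same fields in every row
        return torch.tanh(self.proj(torch.cat([v, f], dim=-1)))  # (M, N, d_e)
```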
Our Table Encoder functions differently from classical knowledge embedding models (e.g., TransE [4]), where embeddings of entities (entry values) and relations (field names) are learned in an unsupervised fashion by minimizing certain reconstruction errors. Embeddings in Neural Enquirer are optimized via supervised learning towards end-to-end QA tasks. Additionally, as will be shown in the experiments, those embeddings function in a way as indices, which do not necessarily encode the exact semantic meaning of their corresponding words.
# 3.3 Executor
Neural Enquirer executes an input query on a KB table through layers of execution. Each layer of executor captures a certain type of operation (e.g., select, where, max, etc.) relevant to the input query², and returns intermediate execution results, referred to as annotations, saved in an external memory of the same layer. A query is executed step by step through a sequence of stacked executors. Such a cascaded architecture enables Neural Enquirer to answer complex, compositional queries. An illustrative example is given in Figure 1, with each executor annotated with the operation it is assumed to perform. We will demonstrate in Section 5 that Neural Enquirer is capable of learning the operation logic of each executor via end-to-end training.
[Figure 3 schematic: the Reader attends over the fields of row m (e.g., year, host city, # participants, # medals), conditioned on the query embedding and the previous table annotation, and emits a read vector via DNN_1.]

Figure 3: Illustration of the Reader for Executor-ℓ.

²Depending on the query, an executor may perform different operations.

As illustrated in Figure 2, an executor at Layer-ℓ (denoted as Executor-ℓ) has two major neural network components: Reader and Annotator. An executor processes a table row by row. For the m-th row, with N (field, value) composite embeddings R_m = {e_m1, e_m2, . . . , e_mN}, the Reader fetches a read vector r^ℓ_m from R_m, which is sent to the Annotator to compute a row annotation a^ℓ_m ∈ R^{d_A}:

Read Vector:    r^ℓ_m = f^ℓ_R(R_m, F_T, q, M^{ℓ−1})    (1)
Row Annotation: a^ℓ_m = f^ℓ_A(r^ℓ_m, q, M^{ℓ−1})    (2)
where M^{ℓ−1} denotes the content in memory Layer-(ℓ−1), and F_T = {f_1, f_2, . . . , f_N} is the set of field name embeddings. Once all row annotations are obtained, Executor-ℓ then generates the table annotation through the following pooling process:
Table Annotation: g^ℓ = f^ℓ_POOL(a^ℓ_1, a^ℓ_2, . . . , a^ℓ_M)

A row annotation captures the local execution result on each row, while a table annotation, derived from all row annotations, summarizes the global computational result on the whole table. Both row annotations {a^ℓ_1, a^ℓ_2, . . . , a^ℓ_M} and the table annotation g^ℓ are saved in memory Layer-ℓ: M^ℓ = {a^ℓ_1, a^ℓ_2, . . . , a^ℓ_M, g^ℓ}.
Our design of the executor is inspired by Neural Turing Machines [6], where data is fetched from an external memory using a read head and subsequently processed by a controller, whose outputs are flushed back into memory. An executor functions similarly by reading data from each row of the table using a Reader, and then calling an Annotator to calculate intermediate computational results as annotations, which are stored in the executor's memory. We assume that row annotations are able to handle operations which require only row-wise, local information (e.g., select, where), while table annotations can model superlative operations (e.g., max, min) by aggregating table-wise, global execution results. Therefore, a combination of row and table annotations enables Neural Enquirer to capture a variety of real-world query operations.
# 3.3.1 Reader
As illustrated in Figure 3, an executor at Layer-ℓ reads in a read vector r^ℓ_m for each row m, defined as the weighted sum of composite embeddings for entries in this row:
r^ℓ_m = f^ℓ_R(R_m, F_T, q, M^{ℓ−1}) = Σ_{n=1}^{N} ω̃(f_n, q, g^{ℓ−1}) e_mn
where ω̃(·) denotes the normalized attention weights, given by:
ω̃(f_n, q, g^{ℓ−1}) = exp(ω(f_n, q, g^{ℓ−1})) / Σ_{n'=1}^{N} exp(ω(f_{n'}, q, g^{ℓ−1}))    (3)
and ω(·) is modeled as a DNN (denoted as DNN^{(ℓ)}_1).
Note that ω̃(·) is agnostic to the values of entries in the row, i.e., in an executor all rows share the same set of weights ω̃(·). Since each executor models a specific type of computation, it should only attend to the subset of entries pertaining to its execution, which is modeled by the Reader. This is related to the content-based addressing of Neural Turing Machines [6] and the attention mechanism in neural machine translation models [2].
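A sketch of the Reader under these equations; the scoring network stands in for DNN^{(ℓ)}_1, and the dimensionalities are illustrative defaults rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class Reader(nn.Module):
    """Sketch: field-level attention shared across rows, computing
    r_m = sum_n softmax(omega(f_n, q, g))_n * e_mn."""
    def __init__(self, d_w=20, d_q=300, d_g=20, d_hid=50):
        super().__init__()
        self.score = nn.Sequential(                 # omega(.), a small DNN
            nn.Linear(d_w + d_q + d_g, d_hid), nn.Tanh(), nn.Linear(d_hid, 1))

    def forward(self, E, F, q, g):
        # E: (M, N, d_e) entries; F: (N, d_w) field embeddings;
        # q: (d_q,) query embedding; g: (d_g,) previous table annotation
        N = F.shape[0]
        ctx = torch.cat([F, q.expand(N, -1), g.expand(N, -1)], dim=-1)
        w = torch.softmax(self.score(ctx).squeeze(-1), dim=0)  # (N,) field weights
        return torch.einsum("n,mne->me", w, E)                 # (M, d_e) read vectors
```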
# 3.3.2 Annotator | 1512.00965#14 | Neural Enquirer: Learning to Query Tables with Natural Language | We proposed Neural Enquirer as a neural network architecture to execute a
In Executor-ℓ, the Annotator computes row and table annotations based on the read vector r^ℓ_m fetched by the Reader; these are then stored in the ℓ-th memory layer M^ℓ, accessible to Executor-(ℓ+1). This process is repeated through the intermediate layers, until the executor in the last layer finally generates the answer.
Row annotations.  A row annotation encodes the local computational result on a specific row. As illustrated in Figure 4, a row annotation for row m in Executor-ℓ, given by
a^ℓ_m = f^ℓ_A(r^ℓ_m, q, M^{ℓ−1}) = DNN^{(ℓ)}_2([r^ℓ_m; q; a^{ℓ−1}_m; g^{ℓ−1}])    (4)
fuses the corresponding read vector r^ℓ_m, the results saved in the previous memory layer (row annotation a^{ℓ−1}_m and table annotation g^{ℓ−1}), and the query embedding q. Basically,
• row annotation a^{ℓ−1}_m represents the local status of the execution before Layer-ℓ;
• table annotation g^{ℓ−1} summarizes the global status of the execution before Layer-ℓ;
• read vector r^ℓ_m stores the value of the current attention;
• query embedding q encodes the overall execution agenda,
all of which are combined through DNN^{(ℓ)}_2 to form the annotation of row m in the current layer.
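A sketch of the row-annotation computation of Eq. (4); the layer sizes are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class Annotator(nn.Module):
    """Sketch of Eq. (4): a_m = DNN_2([r_m; q; a_m_prev; g_prev])."""
    def __init__(self, d_e=20, d_q=300, d_a=20, d_g=20, d_hid=50):
        super().__init__()
        self.dnn = nn.Sequential(
            nn.Linear(d_e + d_q + d_a + d_g, d_hid), nn.Tanh(),
            nn.Linear(d_hid, d_hid), nn.Tanh(),
            nn.Linear(d_hid, d_a))

    def forward(self, r, q, a_prev, g_prev):
        # r: (M, d_e); q: (d_q,); a_prev: (M, d_a); g_prev: (d_g,)
        M = r.shape[0]
        inp = torch.cat([r, q.expand(M, -1), a_prev, g_prev.expand(M, -1)], dim=-1)
        return self.dnn(inp)                     # (M, d_a) row annotations
```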
[Figure 4 schematic: the m-th row annotation at Layer-ℓ is computed from the query embedding, the Layer-(ℓ−1) table annotation, the m-th row annotation at Layer-(ℓ−1), and the Layer-ℓ read vector.]
Figure 4: Illustration of the Annotator for Executor-ℓ.
Table annotations.  Capturing the global execution state, a table annotation is summarized from all row annotations via a global pooling operation. In our implementation of Neural Enquirer we employ max pooling:
g^ℓ = f_POOL(a^ℓ_1, a^ℓ_2, . . . , a^ℓ_M) = [g_1, g_2, . . . , g_{d_G}]^⊤    (5)
where g_k = max({a^ℓ_1(k), a^ℓ_2(k), . . . , a^ℓ_M(k)}) is the maximum value among the k-th elements of all row annotations. It is possible to use other pooling operations (e.g., gated pooling), but we find max pooling yields the best results.
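The pooling step of Eq. (5) is a one-liner; a sketch:

```python
import torch

def table_annotation(row_annotations):
    """Sketch of Eq. (5): element-wise max over all row annotations.
    row_annotations: (M, d_a) -> (d_a,) table annotation."""
    return row_annotations.max(dim=0).values
```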
# 3.3.3 Last Layer of Executor
Instead of computing annotations based on read vectors, the last executor in Neural Enquirer directly outputs the probability of the value of each entry in T being the answer:
p(w_mn | Q, T) = exp(f^ℓ_ANS(e_mn, q, a^{ℓ−1}_m, g^{ℓ−1})) / Σ_{m'=1}^{M} Σ_{n'=1}^{N} exp(f^ℓ_ANS(e_{m'n'}, q, a^{ℓ−1}_{m'}, g^{ℓ−1}))    (6)
where f^ℓ_ANS(·) is modeled as a DNN. Note that the last executor, which is devoted to returning answers, carries out a specific kind of execution using f^ℓ_ANS(·) based on the entry value, the query, and the annotations from the previous layer.
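A sketch of this answer layer, with f^ℓ_ANS approximated by a small feed-forward scorer (sizes are illustrative):

```python
import torch
import torch.nn as nn

class AnswerLayer(nn.Module):
    """Sketch of Eq. (6): a softmax over all M*N table entries, scored by a
    DNN on [entry embedding; query; row annotation; table annotation]."""
    def __init__(self, d_e=20, d_q=300, d_a=20, d_g=20, d_hid=50):
        super().__init__()
        self.f_ans = nn.Sequential(
            nn.Linear(d_e + d_q + d_a + d_g, d_hid), nn.Tanh(), nn.Linear(d_hid, 1))

    def forward(self, E, q, a_prev, g_prev):
        # E: (M, N, d_e); q: (d_q,); a_prev: (M, d_a); g_prev: (d_g,)
        M, N, _ = E.shape
        qx = q.expand(M, N, -1)
        ax = a_prev.unsqueeze(1).expand(M, N, -1)
        gx = g_prev.expand(M, N, -1)
        scores = self.f_ans(torch.cat([E, qx, ax, gx], dim=-1)).squeeze(-1)  # (M, N)
        return torch.softmax(scores.view(-1), dim=0).view(M, N)  # p(w_mn | Q, T)
```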
# 3.4 Handling Multiple Tables
Real-world KBs are often modeled by a schema involving various tables, where each table stores a specific type of factual information. We present Neural Enquirer-M, adapted for simultaneously operating on multiple KB tables. A key challenge in this scenario is that the multiplicity of tables requires modeling the interaction between them. For example, Neural Enquirer-M needs to serve join queries, whose answers are derived by joining fields in different tables. Details of the modeling and experiments of Neural Enquirer-M are given in Appendix C.
# 4 Learning
Neural Enquirer can be trained in an end-to-end (N2N) fashion on Question Answering tasks. During training, both the representations of queries and table entries and the execution logic captured by the weights of executors are learned. More specifically, given a set of N_D query-table-answer triples D = {(Q^{(i)}, T^{(i)}, y^{(i)})}, we optimize the model parameters by maximizing the log-likelihood of gold-standard answers:
L_N2N(D) = Σ_{i=1}^{N_D} log p(y^{(i)} = w_mn | Q^{(i)}, T^{(i)})    (7)
In end-to-end training, each executor discovers its operation logic from training data in a purely data-driven manner, which could be difficult for complicated queries requiring four or five sequential operations.
This can be alleviated by softly guiding the learning process via controlling the attention weights ω̃(·) in Eq. (3). By enforcing ω̃(·) to bias towards a field pertaining to a specific operation,
we can "coerce" the executor to figure out the logic of this particular operation relative to the field. As an example, for Executor-1 in Figure 1, by biasing the weight of the host city field towards 1.0, only the value of the host city field will be fetched and sent for computing annotations; in this way we can force the executor to learn to find the row whose host city is Beijing. This setting will be referred to as step-by-step (SbS) training. Formally, this is done by introducing an additional supervision signal into Eq. (7):
L_SbS(D) = Σ_{i=1}^{N_D} [ log p(y^{(i)} = w_mn | Q^{(i)}, T^{(i)}) + α Σ_{ℓ=1}^{L} log ω̃(f^{(i)}_ℓ) ]    (8)
where α is a scalar and f^{(i)}_ℓ is the embedding of the field name known a priori to be relevant to the executor at Layer-ℓ in the i-th example.
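A sketch of the two objectives; `field_weights` and `gold_fields` are hypothetical handles onto the Readers' attention weights and the a-priori relevant fields:

```python
import torch

def n2n_loss(answer_probs, gold_index):
    """Sketch of Eq. (7): negative log-likelihood of the gold entry.
    answer_probs: (M*N,) flattened output of the last executor."""
    return -torch.log(answer_probs[gold_index] + 1e-12)

def sbs_loss(answer_probs, gold_index, field_weights, gold_fields, alpha=0.2):
    """Sketch of Eq. (8): adds step-by-step supervision on the Readers'
    attention. field_weights[l]: (N,) weights of Executor-l; gold_fields[l]:
    index of the field known a priori to matter at layer l."""
    loss = n2n_loss(answer_probs, gold_index)
    for w, f in zip(field_weights, gold_fields):
        loss = loss - alpha * torch.log(w[f] + 1e-12)
    return loss
```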
# 5 Experiments
In this section we evaluate Neural Enquirer on synthetic QA tasks with queries of varying compositional depth. We will first briefly describe our synthetic QA benchmark and experimental setup, and then discuss the results under different settings.

# 5.1 Synthetic QA Task
We present a synthetic QA task to evaluate the performance of Neural Enquirer, where a large number of QA examples at various levels of complexity are generated to evaluate the single-table and multiple-tables cases of the model. Starting with "artificial" tasks eases the process of developing novel deep models [14], and has gained increasing popularity in recent advances of the research on modeling symbolic computation using DNNs [6, 15].
Our synthetic dataset consists of query-table-answer triples {(Q^{(i)}, T^{(i)}, y^{(i)})}. To generate such a triple, we first randomly sample a table T^{(i)} of size 10 × 10 from a synthetic schema of Olympic Games, which has 10 fields, whose values are drawn from a vocabulary of size 240, with 120 country and city names, and 120 numbers. Figure 5 gives an example table with one row. Next, we generate a query Q^{(i)} using predefined templates, associated with its gold-standard answer y^{(i)} on T^{(i)}. Our task consists of four types of natural language queries as summarized in Table 1, with annotated SQL-like logical forms for easy interpretation. We generate NL queries at various levels of compositionality to mimic the real-world scenario. The complexity of those queries ranges from simple Select Where queries to more complicated Nest ones involving multiple steps of computation. Those queries are flexible enough to involve complex matching between NL phrases and logical constituents, which makes query understanding and execution nontrivial: (1) the same field is described by different NL phrases (e.g., "How big is the country ..." and "What is the size of the country ..." for the country size field); (2) different fields may be referred to by the same NL pattern (e.g., "in China" for country and "in Beijing" for host city); (3) simple NL constituents may be grounded to complex logical operations (e.g., "after the game in Beijing" implies comparing between year fields). In our experiments we use the above procedure to generate benchmark datasets consisting of different types of queries. To make the artificial task harder, we enforce that all queries in the testing set do not appear in the training set.
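As a rough illustration of this generation procedure, here is a sketch of table sampling; the vocabulary lists and several field names (e.g., country_population, GDP) are invented placeholders, since the text only names the fields used in its example queries:

```python
import random

def sample_table(name_vocab, number_vocab, fields, m_rows=10):
    """Sketch: draw a 10x10 table from the synthetic Olympic-Games schema,
    filling name-valued fields from the name vocabulary and the rest with numbers."""
    return [{f: random.choice(name_vocab if f.startswith("host") else number_vocab)
             for f in fields} for _ in range(m_rows)]

# hypothetical schema: only the first eight fields are attested in the queries
fields = ["year", "host_city", "host_country", "#_participants", "#_medals",
          "#_duration", "#_audience", "country_size", "country_population", "GDP"]
table = sample_table(["Beijing", "Athens", "London"], [2000, 2004, 2008, 2012], fields)
```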
Query Type: Select Where
  Q1: How many people participated in the game in Beijing?
  F1: select #_participants, where host_city = Beijing
  Q2: In which country was the game hosted in 2012?
  F2: select host_country, where year = 2012

Query Type: Superlative
  Q3: When was the latest game hosted?
  F3: argmax(host_city, year)
  Q4: How big is the country which hosted the shortest game?
  F4: argmin(country_size, #_duration)

Query Type: Where Superlative
  Q5: How long is the game with the most medals that has fewer than 3,000 participants?
  F5: where #_participants < 3,000, argmax(#_duration, #_medals)
  Q6: How many medals are in the first game after 2008?
  F6: where #_year > 2008, argmin(#_medals, #_year)

Query Type: Nest
  Q7: Which country hosted the longest game before the game in Athens?
  F7: where year < (select year, where host_city = Athens), argmax(host_country, #_duration)
  Q8: How many people watched the earliest game that lasts for more days than the game in 1956?
  F8: where …
Table 1: Example queries in our synthetic QA task
Our two datasets, Mixtured-25K and Mixtured-100K, contain 25K and 100K training examples respectively, where the four types of queries are sampled with the ratio 1 : 1 : 1 : 2. Both datasets share the same testing set of 20K examples, 5K for each type of query.
# 5.2 Setup
[Tuning] We use a Neural Enquirer with five executors. The numbers of layers for DNN^{(ℓ)}_1 and DNN^{(ℓ)}_2 are set to 2 and 3, respectively. We set the dimensionality of word/entity embeddings and row/table annotations to 20, hidden layers to 50, and the hidden states of the GRU in the query encoder to 150. α in Eq. (8) is set to 0.2. We pad the beginning of all input queries to a fixed size.
Neural Enquirer is trained via standard back-propagation. Objective functions are optimized using SGD in mini-batches of size 100 with adaptive learning rates (AdaDelta [16]). The model converges fast, within 100 epochs.
[Baseline] We compare our model with Sempre [11], a state-of-the-art semantic parser.
[Metric] We evaluate the performance of Neural Enquirer and Sempre (baseline) in terms of accuracy, defined as the fraction of correctly answered queries.
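A sketch of this training regime (mini-batch SGD with AdaDelta), reusing the hypothetical `n2n_loss` from the sketch above; the dataset iterator is an assumed interface:

```python
import torch

def train(model, dataset, epochs=100, batch_size=100):
    """Sketch of the described regime: mini-batches of 100 with AdaDelta;
    `model(query, table)` is assumed to return flattened answer probabilities."""
    opt = torch.optim.Adadelta(model.parameters())
    for _ in range(epochs):
        for batch in dataset.iter_batches(batch_size):   # hypothetical iterator
            opt.zero_grad()
            loss = sum(n2n_loss(model(q, t), y) for q, t, y in batch) / len(batch)
            loss.backward()
            opt.step()
```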
# 5.3 Main Results
Table 2 summarizes the results of Sempre (baseline) and our Neural Enquirer under end-to-end (N2N) and step-by-step (SbS) settings. We show both the individual performance for each type of query and the overall accuracy. We evaluate Sempre only on Mixtured-25K because of its long training time even on the smaller Mixtured-25K (> 3 days). We give a discussion of efficiency issues in Appendix B.
We first discuss Neural Enquirer's performance under the end-to-end (N2N) training setting (the 3rd and 6th columns in Table 2), and defer the discussion of the SbS setting to Section 5.4.
| | Select Where | Superlative | Where Superlative | Nest | Overall Acc. |
|---|---|---|---|---|---|
| **Mixtured-25K** | | | | | |
| Sempre | 93.8% | 97.8% | 34.8% | 34.4% | 65.2% |
| N2N | 96.2% | 98.9% | 80.4% | 60.5% | 84.0% |
| SbS | 99.7% | 99.5% | 94.3% | 92.1% | 96.4% |
| N2N - OOV | 90.3% | 98.2% | 79.1% | 57.7% | 81.3% |
| **Mixtured-100K** | | | | | |
| N2N | 99.3% | 99.9% | 98.5% | 64.7% | 90.6% |
| SbS | 100.0% | 100.0% | 99.8% | 99.7% | 99.9% |
| N2N - OOV | 97.6% | 99.7% | 98.0% | 63.9% | 89.8% |
Table 2: Accuracies on Mixtured datasets
Q5: How long is the game with the most medals that has fewer than 3,000 participants?
[Figure 6: bar charts of each executor's field-attention weights (Executor-1 through Executor-5) over the table fields when answering Q5.]

Figure 6: Weights visualization of query Q5
tion 5.4. On Mixtured-25K, our model outperforms Sempre on all types of queries, with a marginal gain on simple queries (Select Where, Superlative), and signiï¬cant improve- ment on complex ones (Where Superlative, Nest). When the size of training set grows (Mixtured-100K), Neural Enquirer achieves near 100% accuracy for the ï¬rst three types of queries, while registering a decent overall accuracy of 90.6%. These results suggest that our model is very eï¬ective in answering compositional natural language queries, especially those with complex semantic structures compared with the state-of-the-art system. | 1512.00965#29 | Neural Enquirer: Learning to Query Tables with Natural Language | We proposed Neural Enquirer as a neural network architecture to execute a
To further understand why our model is capable of handling compositional queries, we study the attention weights ŵ(·) of Readers (Eq. 3) for executors in intermediate layers, and the answer probability (Eq. 6) the last executor outputs for each entry in the table. Those statistics are obtained from the model trained on Mixtured-100K. We sampled two queries (Q5 and Q7 in Table 1) that our model answers correctly and visualized their corresponding values, as illustrated in Figures 6 and 7, respectively. We find that each executor actually learns its execution logic from just the correct answers in end-to-end training, which corresponds with our assumption. For Q5, the model executes the query in three steps, with each of the last three executors performing a specific type of operation. For each row, Executor-3 takes the value of the # participants field as input and computes intermediate annotations, while Executor-4 focuses on the # medals field.
Finally, the last executor outputs high probability for the # duration field (the 5-th column) in the 3-rd row. The attention weights for Executor-1 and Executor-2 appear to be meaningless because Q5 requires only three steps of execution, and the model learns to defer the meaningful execution to the last three executors. We can guess confidently that in executing Q5, Executor-3 performs the conditional filtering operation (the where clause in F5), Executor-4 performs the first part of argmax (finding the maximum value of # medals), and the last executor finishes the execution by assigning high probability to the # duration field of the row with the maximum value of # medals.
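To make the Reader's role concrete, here is a minimal sketch of field-level attention in PyTorch. The shapes and the scoring network are illustrative assumptions of ours, not the paper's exact Eq. 3: each field is scored against the query, a softmax yields the weights ŵ, and the read vector is the weighted sum of the row's cell embeddings.

```python
import torch
import torch.nn.functional as F

d, num_fields = 32, 5                     # illustrative sizes

field_emb = torch.randn(num_fields, d)    # embeddings of the field names
cell_emb = torch.randn(num_fields, d)     # embeddings of one row's cells
q = torch.randn(d)                        # query vector from the encoder

score_mlp = torch.nn.Linear(2 * d, 1)     # scoring network (assumed form)

# Score each field against the query, then normalize over fields.
pairs = torch.cat([field_emb, q.expand(num_fields, d)], dim=-1)
w_hat = F.softmax(score_mlp(pairs).squeeze(-1), dim=0)

# The read vector is the attention-weighted sum of the row's cell embeddings.
read_vector = w_hat @ cell_emb
```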
Compared with the relatively simple Q5, Q7 is more complicated: its logical form F7 involves a nested sub-query and requires five steps of execution. From the weights visualized in Figure 7, we can find that the last three executors function similarly to the case of answering Q5, yet the execution logic of the first two executors is a bit obscure. We posit that this is because, during end-to-end training, the supervision signal propagated from the top layer has decayed along the long path down to the first two executors, causing a vanishing gradient problem.
Q7: Which country hosted the longest game before the game in Athens?
[Figure: attention-weight bar plots for Executor-1 through Executor-5 over the table fields]
Figure 7: Weights visualization of query Q7
Q8: How many people watched the earliest game that lasts for more days than the game in 1956?
[Figure: attention-weight bar plots for Executor-1 through Executor-5 over the table fields]
Figure 8: Weights visualization of query Q8 (an incorrectly answered query)
We also investigate a case where our model fails to deliver the correct answer for complicated queries. Figure 8 gives such a query, Q8 in Table 1, together with its visualized weights. Like Q7, Q8 requires five steps of execution. Besides messing up the weights in the first two executors, the last executor, Executor-5, predicts a wrong entry as the query answer, instead of the highlighted (in red rectangle) correct entry.
# 5.4 With Additional Step-by-Step Supervision
To alleviate the vanishing gradient problem when training on complex queries, as described in Section 5.3, in our next set of experiments we trained our Neural Enquirer model using step-by-step (SbS) training (Eq. 8), where we encourage each executor to attend to a specific field that is known a priori to be relevant to its execution logic. The results are shown in the 4-th and 7-th columns of Table 2. With this stronger supervision signal, the model significantly outperforms the results in the end-to-end setting, achieving near 100% accuracy on all types of queries. This shows that Neural Enquirer is capable of leveraging the additional supervision signal given to intermediate layers in the SbS training setting, and of answering complex and compositional queries with near-perfect accuracy.
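A hedged sketch of what an SbS objective in the spirit of Eq. 8 could look like: the usual answer loss is augmented with a per-executor term that rewards attention mass on the field annotated as relevant. The function name and the weighting factor `alpha` are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def sbs_loss(answer_logits, gold_entry, executor_attn, gold_fields, alpha=1.0):
    """answer_logits: (num_entries,) scores over all table entries
    executor_attn:   list of (num_fields,) attention weights, one per executor
    gold_fields:     annotated relevant field index for each executor"""
    # Usual end-to-end loss on the final answer distribution.
    loss = F.cross_entropy(answer_logits.unsqueeze(0),
                           torch.tensor([gold_entry]))
    # Extra supervision: each executor should attend to its annotated field.
    for w_hat, f_star in zip(executor_attn, gold_fields):
        loss = loss - alpha * torch.log(w_hat[f_star] + 1e-8)
    return loss
```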
Let us revisit the query Q7 in the SbS setting with the weights visualization in Figure 9. In contrast to the result in the N2N setting (Figure 7), where the attention weights for the first two executors are obscure, the weights in every executor are perfectly skewed towards the field actually pertaining to each layer of execution (with a weight of 1.0). Quite interestingly, the attention weights for Executor-3 and Executor-4 are exactly the same as in the N2N setting, while the weights for Executor-1 and Executor-2 are significantly different, suggesting that Neural Enquirer learned a different execution logic in the SbS setting.
Q7: Which country hosted the longest game before the game in Athens?
[Figure: attention-weight bar plots for Executor-1 through Executor-5 over the table fields]
Figure 9: Weights visualization of query Q7 in step-by-step training setting
# 5.5 Dealing with Out-Of-Vocabulary Words
Q9: How many people watched the game in Macau?
[Figure: attention-weight bar plots for Executor-1 through Executor-5 over the table fields]
Figure 10: Weights visualization of query Q9
One of the major challenges in applying neural network models to NLP is dealing with Out-Of-Vocabulary (OOV) words, which is particularly severe for QA: it is hard to cover all existing tail entities, while new entities appear every day in user-issued queries and back-end KBs. Quite interestingly, we find that a simple variation of Neural Enquirer is able to handle unseen entities almost without any loss of accuracy.
Basically, we divide the words in the vocabulary into entity words and operation words. Embeddings of entity words (e.g., Beijing, China) function as a kind of index to facilitate the matching between entities in queries and tables during the layer-by-layer execution, and do not need to be updated once initialized; those of operation words, i.e., all non-entity words (e.g., numbers, longest, before, etc.), carry semantic meanings relevant to execution and should be optimized during training. Therefore, after randomly initializing the embedding matrix L, we only update the embeddings of operation words in training, while keeping those of entity words unchanged.
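A minimal sketch of this training scheme, under the assumption that it is implemented by gradient masking (the indices below are hypothetical): the rows of the embedding matrix L belonging to entity words have their gradients zeroed after each backward pass, so only operation-word embeddings are updated.

```python
import torch

vocab_size, d = 1000, 32
L = torch.nn.Embedding(vocab_size, d)       # embedding matrix L
entity_ids = torch.tensor([7, 42, 99])      # rows of entity words (hypothetical)

def zero_entity_grads(grad):
    grad = grad.clone()
    grad[entity_ids] = 0.0                  # entity embeddings stay at init
    return grad

# Applied on every backward pass; the optimizer then never moves entity rows.
L.weight.register_hook(zero_entity_grads)
```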
To test the model's performance on OOV words, we modify queries in the testing portion of the Mixtured dataset to replace all entity words (i.e., all country and city names) with OOV ones³ unseen in the training set. Results obtained using N2N training are summarized in the 5-th and 8-th columns of Table 2. As the table shows, Neural Enquirer trained in this OOV setting yields performance comparable to that in the non-OOV setting, indicating that operation words and entity words play different roles in query execution.
An interesting question to investigate in this OOV setting is how Neural Enquirer distinguishes between different types of entity words (i.e., cities and countries) in queries, since their embeddings are randomly initialized and fixed thereafter. An example query is Q9: "How many people watched the game in Macau?", where Macau is an OOV entity. To help understand how the model knows Macau is a city, we give its weights visualization in Figure 10. Interestingly, the model first checks the host city field in Executor-3, and then host country in Executor-4, which suggests that the model learns to scan all possible fields to which the OOV entity may belong.
³They also have embeddings in L.
| Query Type | Select Where | Superlative | Where Superlative | Overall |
|---|---|---|---|---|
| Accuracy | – | – | 77.7% | 84.8% |
Table 3: Accuracies for the large knowledge source simulation
[Figure: during training, each table covers only a subset of fields, e.g., {# audience, host_city} (row: 75,000 | Beijing; query: "How many audience members are in Beijing?") or {year, # participants} (row: 2008 | 2,500; query: "When was the game with 2,500 participants?"); during testing, tables carry the full field set {# audience, host_city, year, # participants} (rows: 65,000 | Beijing | 2008 | 2,000 and 50,000 | London | 2012 | 3,000) with queries such as "When was the game in Beijing?" and "How many people watched the game with 3,000 participants?"]
Figure 11: Large knowledge source simulation
# 5.6 Simulating Large Knowledge Source
An important direction in semantic parsing research is to scale to large knowledge sources [3, 11]. In this set of experiments we simulate a test case to evaluate Neural Enquirer's ability to generalize to a large knowledge source. We train a model on tables whose field sets are either F1, F2, . . . , F5, where each Fi (with |Fi| = 5) is a subset of the entire set FT. We then test the model on tables with all fields FT and queries whose fields span multiple subsets Fi. Figure 11 illustrates the setting. Note that all testing queries exhibit field combinations unseen in the training data, to mimic the difficulty the system often encounters when scaling to a large knowledge source, which usually poses a great challenge to the model's generalization ability. We train and test the model only on a new dataset of the first three types of relatively simple queries (namely Select Where, Superlative and Where Superlative). The sizes of the training/testing splits are 75,000 and 30,000, with equal numbers for the different query types. Table 3 lists the results.
Neural Enquirer still maintains a reasonable performance even though the compositionality of the testing queries is previously unseen, showing the model's ability to generalize to unseen query patterns through composition of familiar ones, and hence its potential to scale to larger and unseen knowledge sources.
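A toy sketch of this data regime (field names taken from Figure 11; cell values are random placeholders of our own, standing in for real table content): training tables draw their columns from small field subsets, while testing tables carry the full field set, so test queries mix field combinations never seen together during training.

```python
import random

FULL_FIELDS = ["# audience", "host_city", "year", "# participants"]
TRAIN_SUBSETS = [["# audience", "host_city"], ["year", "# participants"]]

def sample_table(fields, num_rows=10):
    # Cell values are random placeholders standing in for real table content.
    return {"fields": fields,
            "rows": [[random.randint(0, 100_000) for _ in fields]
                     for _ in range(num_rows)]}

train_table = sample_table(random.choice(TRAIN_SUBSETS))  # subset of fields
test_table = sample_table(FULL_FIELDS)                     # full, unseen mix
```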
# 6 Related Work
Our work falls into the research area of Semantic Parsing, where the key problem is to parse natural language queries into logical forms executable on KBs. Classical approaches to Semantic Parsing can be broadly divided into two categories. The first line of research resorts to the power of grammatical formalisms (e.g., Combinatory Categorial Grammar) to parse NL queries and generate corresponding logical forms, which requires curated/learned lexicons defining the correspondence between NL phrases and symbolic constituents [17, 7, 1, 18].
The model is tuned with annotated logical forms, and is capable of recovering complex semantics from data, but is often constrained to a specific domain due to scalability issues brought by the crisp grammars and the lack of annotated training data. Another line of research takes a semi-supervised learning approach, and adopts the results of query execution (i.e., answers) as the supervision signal [5, 3, 10, 11, 12]. The parsers designed for this learning paradigm take different forms, ranging from generic chart parsers [3, 11] to more specifically engineered, task-oriented ones [12, 8]. Semantic parsers in this category often scale to open-domain knowledge sources, but lack the ability to understand compositional queries because of the intractable search space incurred by the flexibility of the parsing algorithms. Our work follows this line of research in using query answers as indirect supervision to facilitate end-to-end training on QA tasks, but performs semantic parsing in distributional spaces, where logical forms are "neuralized" into an executable distributed representation.
Our work is also related to recent advances in modeling symbolic computation with deep neural networks. Pioneered by the development of Neural Turing Machines (NTMs) [6], this line of research studies the problem of using differentiable neural networks to perform "hard" symbolic execution. As an independent line of research with a similar flavor, Zaremba et al. [15] designed an LSTM-RNN to execute simple Python programs, where the parameters are learned by comparing the network output with the correct answer. Our work is related to both lines of work: like NTM, we heavily use external memory and flexible ways of processing (e.g., the attention-based reading in the operations of the Reader), and like [15], Neural Enquirer learns to execute sequences with complicated structure and is tuned by executing them. As a highlight and difference from previous work, we have a deep architecture with multiple layers of external memory, with the neural network operations highly customized to querying KB tables.
Perhaps the most related work to date is the recently published Neural Programmer proposed by Neelakantan et al. [9], which studies the same task of executing queries on tables with deep neural networks. Neural Programmer uses a neural network model to select operations during query processing. While the query planning phase (i.e., which operation to execute at each time step) is modeled softly using neural networks, the symbolic operations themselves are predefined by users. In contrast, Neural Enquirer is fully distributional: it models both the query planning and the operations with neural networks, which are jointly optimized via end-to-end training. Our Neural Enquirer model learns symbolic operations in a data-driven way, and demonstrates that a fully neural, end-to-end differentiable system is capable of modeling and executing compositional arithmetic and logic operations up to a certain level of complexity.
# 7 Conclusion and Future Work
In this paper we propose Neural Enquirer, a fully neural, end-to-end differentiable network that learns to execute queries on tables. We present results on a set of synthetic QA tasks to demonstrate the ability of Neural Enquirer to answer fairly complicated compositional queries across multiple tables. In the future we plan to advance this work in the following directions. First, we will apply Neural Enquirer to natural language questions and natural language answers, where both the input query and the output supervision are noisier and less informative. Second, we are going to scale to real-world QA tasks as in [11], for which we have to deal with a large vocabulary and novel predicates. Third, we are going to work on the computational efficiency issue in query execution by heavily borrowing from symbolic operations.
# References
[1] Y. Artzi, K. Lee, and L. Zettlemoyer. Broad-coverage CCG semantic parsing with AMR. In EMNLP, pages 1699–1710, 2015.

[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
[3] J. Berant, A. Chou, R. Frostig, and P. Liang. Semantic parsing on Freebase from question-answer pairs. In EMNLP, pages 1533–1544, 2013.

[4] A. Bordes, N. Usunier, A. García-Durán, J. Weston, and O. Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, pages 2787–2795, 2013.

[5] D. L. Chen and R. J. Mooney. Learning to sportscast: a test of grounded language acquisition. In ICML, pages 128–135, 2008.

[6] A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. CoRR, abs/1410.5401, 2014.

[7] T. Kwiatkowski, E. Choi, Y. Artzi, and L. S. Zettlemoyer. Scaling semantic parsers with on-the-fly ontology matching. In EMNLP, pages 1545–1556, 2013.
[8] D. K. Misra, K. Tao, P. Liang, and A. Saxena. Environment-driven lexicon induction for high-level instructions. In ACL (1), pages 992–1002, 2015.
[9] A. Neelakantan, Q. V. Le, and I. Sutskever. Neural Programmer: Inducing Latent Programs with Gradient Descent. ArXiv e-prints, Nov. 2015.
[10] P. Pasupat and P. Liang. Zero-shot entity extraction from web pages. In ACL (1), pages 391–401, 2014.

[11] P. Pasupat and P. Liang. Compositional semantic parsing on semi-structured tables. In ACL (1), pages 1470–1480, 2015.

[12] W.-t. Yih, M.-W. Chang, X. He, and J. Gao. Semantic parsing via staged query graph generation: question answering with knowledge base. In ACL (1), pages 1321–1331, 2015.

[13] T.-H. Wen, M. Gasic, N. Mrksic, P.-h. Su, D. Vandyke, and S. J. Young. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP, pages 1711–1721, 2015.
[14] J. Weston, A. Bordes, S. Chopra, and T. Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. CoRR, abs/1502.05698, 2015.
[15] W. Zaremba and I. Sutskever. Learning to execute. CoRR, abs/1410.4615, 2014.
[16] M. D. Zeiler. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701, 2012.
[17] L. S. Zettlemoyer and M. Collins. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In UAI, pages 658–666, 2005.

[18] L. S. Zettlemoyer and M. Collins. Online learning of relaxed CCG grammars for parsing to logical form. In EMNLP-CoNLL, pages 678–687, 2007.
# A Computation of Query Encoder
We use a bidirectional RNN as the Query Encoder, which consists of a forward GRU and a backward GRU. Given the sequence of word embeddings {x_1, x_2, ..., x_T} of Q, at each time step t the forward GRU computes the hidden state h_t as follows:
$$
\begin{aligned}
\mathbf{h}_t &= \mathbf{z}_t \odot \mathbf{h}_{t-1} + (\mathbf{1} - \mathbf{z}_t) \odot \tilde{\mathbf{h}}_t \\
\tilde{\mathbf{h}}_t &= \tanh\big(\mathbf{W}\mathbf{x}_t + \mathbf{U}(\mathbf{r}_t \odot \mathbf{h}_{t-1})\big) \\
\mathbf{z}_t &= \sigma(\mathbf{W}_z \mathbf{x}_t + \mathbf{U}_z \mathbf{h}_{t-1}) \\
\mathbf{r}_t &= \sigma(\mathbf{W}_r \mathbf{x}_t + \mathbf{U}_r \mathbf{h}_{t-1})
\end{aligned}
$$
where W, W_z, W_r, U, U_z, U_r are parametric matrices, 1 is the column vector of all ones, and ⊙ denotes element-wise multiplication. The backward GRU reads the sequence in reverse order. We concatenate the last hidden states given by the two GRUs as the vectorial representation q of the query.
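For reference, the same encoder can be sketched with PyTorch's built-in GRU, which implements exactly these gated updates; the query vector q concatenates the final hidden states of the forward and backward passes. Dimensions below are illustrative, not the paper's.

```python
import torch

d_word, d_hidden = 32, 64
encoder = torch.nn.GRU(d_word, d_hidden, bidirectional=True, batch_first=True)

words = torch.randn(1, 7, d_word)    # embeddings x_1 ... x_T of one query
_, h_last = encoder(words)           # h_last: (2 directions, 1, d_hidden)

# q concatenates the forward and backward final hidden states.
q = torch.cat([h_last[0], h_last[1]], dim=-1).squeeze(0)
```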
# B Efficiency of Model Learning
We compared the training efficiency of Neural Enquirer and Sempre (the baseline) by plotting accuracy on the testing data against training time. Figure 12 illustrates the results. We train Neural Enquirer-CPU and Sempre on a machine with an Intel Core i7-3770 @ 3.40GHz and 16GB of memory, while Neural Enquirer-GPU is tuned on an Nvidia Tesla K40. Neural Enquirer-CPU is 10 times faster than Sempre, and Neural Enquirer-GPU is 100 times faster.
[Figure: accuracy on testing data vs. training time (s), comparing Neural Enquirer-CPU, Neural Enquirer-GPU, and Sempre-CPU]
Figure 12: Accuracy on Testing by Training Time
# C Handling Multiple Tables
1512.00965 | 52 | Figure 12: Accuracy on Testing by Training Time
# C Handling Multiple Tables
# C.1 Neural Enquirer-M Model

[Figure 13: Executor-(ℓ, 1) and Executor-(ℓ, K) each take the query embedding, the embedding of their own table, and the row annotations in Memory Layer-(ℓ−1); the Reader produces read vectors and the Annotator writes row annotations to Memory Layer-ℓ]

Figure 13: Executor-(ℓ, 1) and Executor-(ℓ, K) in multiple tables case

Basically, Neural Enquirer-M assigns an executor to each table $T_k$ in every execution layer $\ell$, denoted as Executor-($\ell$, $k$). Figure 13 pictorially illustrates Executor-($\ell$, 1) and Executor-($\ell$, $K$) working on Table-1 and Table-K respectively. Within each executor, the Reader is designed the same way as in the single-table case, while we modify the Annotator to let in the information from other tables. More specifically, for Executor-($\ell$, $k$), we extend its Annotator to leverage computational results from other tables when computing the annotation for the m-th row:
$$\mathbf{a}^{\ell}_{k,m} = f^{\ell}_{\text{A}}\big(\mathbf{r}^{\ell}_{k,m},\, \mathbf{q},\, \mathbf{a}^{\ell-1}_{k,m},\, \mathbf{g}^{\ell-1}_{k},\, \tilde{\mathbf{a}}^{\ell-1}_{k,m},\, \tilde{\mathbf{g}}^{\ell-1}_{k}\big) = \text{DNN}^{\ell}_{2}\big(\big[\mathbf{r}^{\ell}_{k,m};\, \mathbf{q};\, \mathbf{a}^{\ell-1}_{k,m};\, \mathbf{g}^{\ell-1}_{k};\, \tilde{\mathbf{a}}^{\ell-1}_{k,m};\, \tilde{\mathbf{g}}^{\ell-1}_{k}\big]\big)$$
This process is illustrated in Figure 14. Note that we add subscripts $k \in [1, K]$ to the notations to index tables. To model the interaction between tables, the Annotator incorporates the "relevant" row annotation, $\tilde{\mathbf{a}}^{\ell-1}_{k,m}$, and the "relevant" table annotation, $\tilde{\mathbf{g}}^{\ell-1}_{k}$, derived from the previous execution results of other tables, when computing the current row annotation.
A relevant row annotation stores the data fetched from row annotations of other tables, while a relevant table annotation summarizes the table-wise execution results from other tables. We now describe how to compute those annotations. First, for each table $T_{k'}$, $k' \neq k$, we fetch a relevant row annotation $\tilde{\mathbf{a}}^{\ell-1}_{k,k',m}$ from all row annotations $\{\mathbf{a}^{\ell-1}_{k',m'}\}_{m'=1}^{M_{k'}}$ of $T_{k'}$ via attentive reading:
$$\tilde{\mathbf{a}}^{\ell-1}_{k,k',m} = \sum_{m'=1}^{M_{k'}} \frac{\exp\big(\omega(\mathbf{r}^{\ell}_{k,m}, \mathbf{q}, \mathbf{a}^{\ell-1}_{k',m'}, \mathbf{g}^{\ell-1}_{k}, \mathbf{g}^{\ell-1}_{k'})\big)}{\sum_{m''=1}^{M_{k'}} \exp\big(\omega(\mathbf{r}^{\ell}_{k,m}, \mathbf{q}, \mathbf{a}^{\ell-1}_{k',m''}, \mathbf{g}^{\ell-1}_{k}, \mathbf{g}^{\ell-1}_{k'})\big)}\; \mathbf{a}^{\ell-1}_{k',m'}$$
Intuitively, the attention weight $\omega(\cdot)$ (modeled by a DNN) captures how important the $m'$-th row annotation from table $T_{k'}$, $\mathbf{a}^{\ell-1}_{k',m'}$, is with respect to the current step of execution. After getting the set of row annotations fetched from all other tables, $\{\tilde{\mathbf{a}}^{\ell-1}_{k,k',m}\}_{k'=1,\,k'\neq k}^{K}$, we then compute $\tilde{\mathbf{a}}^{\ell-1}_{k,m}$ and $\tilde{\mathbf{g}}^{\ell-1}_{k}$ via a pooling operation⁴ on $\{\tilde{\mathbf{a}}^{\ell-1}_{k,k',m}\}_{k'=1,\,k'\neq k}^{K}$ and $\{\mathbf{g}^{\ell-1}_{k'}\}_{k'=1,\,k'\neq k}^{K}$:
$$\big(\tilde{\mathbf{a}}^{\ell-1}_{k,m},\, \tilde{\mathbf{g}}^{\ell-1}_{k}\big) = f_{\text{Pool}}\big(\{\tilde{\mathbf{a}}^{\ell-1}_{k,k',m}\}_{k'=1,\,k'\neq k}^{K},\, \{\mathbf{g}^{\ell-1}_{k'}\}_{k'=1,\,k'\neq k}^{K}\big)$$
In summary, relevant row and table annotations encode the local and global computational results from other tables. By incorporating them into calculating row annotations, Neural Enquirer-M is capable of answering queries that involve interaction between multiple tables.
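To make the data flow concrete, the following minimal NumPy sketch traces one Annotator step of Executor-(ℓ, k). Everything in it is illustrative rather than the paper's exact parameterization: the function names, the callables `omega_dnn` (standing in for the scoring DNN ω) and `dnn2` (standing in for DNN₂), and the choice of mean-pooling as f_Pool are all assumptions of the sketch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attentive_read(r_km, q, A_other, g_k, g_other, omega_dnn):
    """Fetch a relevant row annotation from another table: a softmax-weighted
    combination of that table's previous-layer row annotations A_other (M' x d)."""
    scores = np.array([omega_dnn(r_km, q, a, g_k, g_other) for a in A_other])
    return softmax(scores) @ A_other

def annotate_row(r_km, q, a_prev, g_k, other_tables, omega_dnn, dnn2):
    """One Annotator step of Executor-(l, k) for the m-th row.
    other_tables is a list of (A_other, g_other) pairs, one per table k' != k."""
    fetched = [attentive_read(r_km, q, A, g_k, g, omega_dnn) for A, g in other_tables]
    # f_Pool: here simply mean-pooling the fetched row and table annotations.
    a_tilde = np.mean(fetched, axis=0)
    g_tilde = np.mean([g for _, g in other_tables], axis=0)
    # The DNN_2 step: concatenate all inputs and map to the new row annotation.
    return dnn2(np.concatenate([r_km, q, a_prev, g_k, a_tilde, g_tilde]))
```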
Finally, Neural Enquirer-M outputs the ranked list of answer probabilities by normalizing the value of g(·) in Eq. (6) over each entry of every table.
⁴ This operation is trivial in our experiments on two tables.
[Figure 14: the Annotator combines the m-th row annotation and the table annotation from Memory Layer-(ℓ−1) of its own table with relevant row and table annotations drawn from Memory Layer-(ℓ−1) of the other tables to produce the m-th row annotation at layer ℓ]
Figure 14: Illustration of Annotator for multiple tables case
[Figure 15: one example row from each table — Table-1: (China, 2008, Beijing, 3,500, 4,200, 30, 67,000); Table-2: country = China, continent = Asia, population = 130, country size = 960]
Figure 15: Multiple tables example in the synthetic QA task (only one row shown)
# C.2 Experimental Results
We present preliminary results for Neural Enquirer-M, which we evaluated on SQL-like Select Where logical forms (like F1, F2 in Table 1). We sampled a dataset of 100K examples, with each example having two tables as in Figure 15. Out of all Select Where queries, roughly half (denoted as "Join") require joining the two tables to derive answers. We tested a model with three executors. Table 4 lists the results. The accuracy on join queries is lower than that on non-join queries, which is caused by the additional interaction between the two tables involved in answering join queries.
| Query Type | Non-Join | Join | Overall |
|---|---|---|---|
| Accuracy | 99.7% | 81.5% | 91.3% |
Table 4: Accuracies of Select Where queries on two tables
[Figure 16: heat maps of the attention weights placed over the fields of the two tables by Executor-(1, 1), Executor-(1, 2), Executor-(2, 1), Executor-(2, 2), Executor-(3, 1), and Executor-(3, 2) for query Q9: "select country size, where year = 2012"]

Figure 16: Weights visualization of query Q9

We find that Neural Enquirer-M is capable of identifying that the country field is the foreign key linking the two tables. Figure 16 illustrates the attention weights for a correctly answered join
query Q9. Although the query does not contain any hints for the foreign key (the country field), Executor-(1, 1) (the executor at Layer-1 on Table-1) operates on an ensemble of embeddings of the country and year fields, whose output row annotations (containing information of both the key country and the value year) are sent to Executor-(2, 2) to compare with the country field in Table-2. We posit that the result of this comparison is stored in the row annotations of Executor-(2, 2) and subsequently sent to the executors at Layer-3 for computing the answer probability for each entry in the two tables.
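For intuition, the discrete computation that answering Q9 requires can be sketched in a few lines of Python. The Table-1 field names beyond country and year are hypothetical, and Neural Enquirer-M performs a soft, differentiable analogue of this filtering rather than the hard version shown here:

```python
# Rows in the style of Figure 15 (Table-1 fields beyond country/year are assumed).
table_1 = [{"country": "China", "year": 2008, "host_city": "Beijing"}]
table_2 = [{"country": "China", "continent": "Asia", "population": 130, "country size": 960}]

# Q9: "select country size, where year = 2012"
# Step 1: the where-condition lives in Table-1; keep the linking key of matching rows.
keys = {row["country"] for row in table_1 if row["year"] == 2012}
# Step 2: the selected field lives in Table-2; follow the foreign key to read it out.
# (With the single Figure 15 row this filter matches nothing; real tables have many rows.)
answers = [row["country size"] for row in table_2 if row["country"] in keys]
```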
# Rethinking the Inception Architecture for Computer Vision

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna (University College London, [email protected])

Source: http://arxiv.org/pdf/1512.00567
# Abstract
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible, by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error (3.6% on the test set) and 17.3% top-1 error on the validation set.
# 1. Introduction
These successes spurred a new line of research that focused on finding higher performing convolutional neural networks. Starting in 2014, the quality of network architectures significantly improved by utilizing deeper and wider networks. VGGNet [18] and GoogLeNet [20] yielded similarly high performance in the 2014 ILSVRC [16] classification challenge. One interesting observation was that gains in the classification performance tend to transfer to significant quality gains in a wide variety of application domains. This means that architectural improvements in deep convolutional architecture can be utilized for improving performance for most other computer vision tasks that are increasingly reliant on high quality, learned visual features. Also, improvements in the network quality resulted in new application domains for convolutional networks in cases where AlexNet features could not compete with hand engineered, crafted solutions, e.g. proposal generation in detection [4].

Although VGGNet [18] has the compelling feature of architectural simplicity, this comes at a high cost: evaluating the network requires a lot of computation. On the other hand, the Inception architecture of GoogLeNet [20] was also designed to perform well even under strict constraints on memory and computational budget. For example, GoogLeNet employed only 5 million parameters, which represented a 12× reduction with respect to its predecessor AlexNet, which used 60 million parameters. Furthermore, VGGNet employed about 3× more parameters than AlexNet.

The computational cost of Inception is also much lower than VGGNet or its higher performing successors [6]. This has made it feasible to utilize Inception networks in big-data scenarios [17], [13], where huge amounts of data need to be processed at reasonable cost, or in scenarios where memory or computational capacity is inherently limited, for example in mobile vision settings. It is certainly possible to mitigate parts of these issues by applying specialized solutions to target memory use [2], [15] or by optimizing the execution of certain operations via computational tricks [10]. However, these methods add extra complexity. Furthermore, these methods could also be applied to optimize the Inception architecture, widening the efficiency gap again.

Still, the complexity of the Inception architecture makes
it more difficult to make changes to the network. If the architecture is scaled up naively, large parts of the computational gains can be immediately lost. Also, [20] does not provide a clear description of the contributing factors that lead to the various design decisions of the GoogLeNet architecture. This makes it much harder to adapt it to new use-cases while maintaining its efficiency. For example, if it is deemed necessary to increase the capacity of some Inception-style model, the simple transformation of just doubling the number of all filter bank sizes will lead to a 4× increase in both computational cost and number of parameters. This might prove prohibitive or unreasonable in a lot of practical scenarios, especially if the associated gains are modest. In this paper, we start with describing a few general principles and optimization ideas that proved to be useful for scaling up convolution networks in efficient ways. Although our principles are not
limited to Inception-type networks, they are easier to observe in that context, as the generic structure of the Inception-style building blocks is flexible enough to incorporate those constraints naturally. This is enabled by the generous use of dimensional reduction and parallel structures of the Inception modules, which allows for mitigating the impact of structural changes on nearby components. Still, one needs to be cautious about doing so, as some guiding principles should be observed to maintain high quality of the models.
# 2. General Design Principles
Here we will describe a few design principles based on large-scale experimentation with various architectural choices for convolutional networks. At this point, the utility of the principles below is speculative, and additional future experimental evidence will be necessary to assess their accuracy and domain of validity. Still, grave deviations from these principles tended to result in deterioration in the quality of the networks, and fixing situations where those deviations were detected resulted in improved architectures in general.
1. Avoid representational bottlenecks, especially early in the network. Feed-forward networks can be represented by an acyclic graph from the input layer(s) to the classifier or regressor. This defines a clear direction for the information flow. For any cut separating the inputs from the outputs, one can assess the amount of information passing through the cut. One should avoid bottlenecks with extreme compression. In general the representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand. Theoretically, information content cannot be assessed merely by the dimensionality of the representation, as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.
2. Higher dimensional representations are easier to process locally within a network. Increasing the activations per tile in a convolutional network allows for more disentangled features. The resulting networks will train faster.
3. Spatial aggregation can be done over lower dimensional embeddings without much or any loss in representational power. For example, before performing a more spread out (e.g. 3 × 3) convolution, one can reduce the dimension of the input representation before the spatial aggregation without expecting serious adverse effects. We hypothesize that the reason for this is that the strong correlation between adjacent units results in much less loss of information during dimension reduction, if the outputs are used in a spatial aggregation context. Given that these signals should be easily compressible, the dimension reduction even promotes faster learning.
4. Balance the width and depth of the network. Optimal performance of the network can be reached by balancing the number of filters per stage and the depth of the network. Increasing both the width and the depth of the network can contribute to higher quality networks. However, the optimal improvement for a constant amount of computation can be reached if both are increased in parallel. The computational budget should therefore be distributed in a balanced way between the depth and width of the network.
Although these principles might make sense, it is not straightforward to use them to improve the quality of networks out of the box. The idea is to use them judiciously in ambiguous situations only.
# 3. Factorizing Convolutions with Large Filter Size
Much of the original gains of the GoogLeNet network [20] arise from a very generous use of dimension reduction. This can be viewed as a special case of factorizing convolutions in a computationally efficient manner. Consider for example the case of a 1 × 1 convolutional layer followed by a 3 × 3 convolutional layer. In a vision network, it is expected that the outputs of nearby activations are highly correlated. Therefore, we can expect that their activations can be reduced before aggregation and that this should result in similarly expressive local representations.
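As a sketch of this pattern (PyTorch assumed; the channel counts are arbitrary illustrations, not the paper's configuration), the 1 × 1 layer reduces the embedding dimension before the 3 × 3 layer aggregates spatially:

```python
import torch.nn as nn

# Dimension reduction before spatial aggregation: a cheap 1x1 convolution
# shrinks the channel dimension, then the 3x3 convolution aggregates over
# neighboring positions on the low-dimensional embedding.
reduce_then_aggregate = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1),             # 1x1 dimension reduction
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 192, kernel_size=3, padding=1),  # 3x3 spatial aggregation
    nn.ReLU(inplace=True),
)
```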
Here we explore other ways of factorizing convolutions in various settings, especially in order to increase the computational efficiency of the solution. Since Inception networks are fully convolutional, each weight corresponds to
one multiplication per activation. Therefore, any reduction in computational cost results in a reduced number of parameters. This means that with suitable factorization, we can end up with more disentangled parameters and therefore with faster training. Also, we can use the computational and memory savings to increase the filter-bank sizes of our network while maintaining our ability to train each model replica on a single computer.

Figure 1. Mini-network replacing the 5 × 5 convolutions.
# 3.1. Factorization into smaller convolutions
Convolutions with larger spatial filters (e.g. 5 × 5 or 7 × 7) tend to be disproportionally expensive in terms of computation. For example, a 5 × 5 convolution with n filters over a grid with m filters is 25/9 = 2.78 times more computationally expensive than a 3 × 3 convolution with the same number of filters. Of course, a 5 × 5 filter can capture dependencies between signals from activations of units further away in the earlier layers, so a reduction of the geometric size of the filters comes at a large cost of expressiveness. However, we can ask whether a 5 × 5 convolution could be replaced by a multi-layer network with fewer parameters, the same input size, and the same output depth. If we zoom into the computation graph of the 5 × 5 convolution, we see that each output looks like a small fully-connected network sliding over 5 × 5 tiles of its input (see Figure 1). Since we are constructing a vision network, it seems natural to exploit translation invariance again and replace the fully connected component by a two-layer convolutional architecture: the first layer is a 3 × 3 convolution, the second is a fully connected layer on top of the 3 × 3 output grid of the first layer (see Figure 1). Sliding this small network over the input activation grid boils down to replacing the 5 × 5 convolution with two layers of 3 × 3 convolution (compare Figure 4 with 5).
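A minimal PyTorch-style sketch of this replacement (the channel count of 192 is an arbitrary assumption): both blocks see a 5 × 5 receptive field, but the stacked version costs (9 + 9)/25 of the multiply-adds:

```python
import torch.nn as nn

# One 5x5 convolution over a 192-channel feature map...
conv5x5 = nn.Conv2d(192, 192, kernel_size=5, padding=2)

# ...replaced by two stacked 3x3 convolutions with the same 5x5 receptive
# field, costing (9 + 9)/25 of the multiply-adds and parameters.
factorized = nn.Sequential(
    nn.Conv2d(192, 192, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),  # ReLU between the layers; the control experiments
                            # around Figure 2 favor this over a linear first layer
    nn.Conv2d(192, 192, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)
```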
Figure 2. One of several control experiments between two Inception models, one of them using factorization into linear + ReLU layers, the other using two ReLU layers. After 3.86 million operations, the former settles at 76.2%, while the latter reaches 77.2% top-1 accuracy on the validation set.

This setup clearly reduces the parameter count by sharing the weights between adjacent tiles. To analyze the
expected computational cost savings, we will make a few simplifying assumptions that apply to the typical situations: we can assume that n = αm, that is, that we want to change the number of activations/unit by a constant factor α. Since the 5 × 5 convolution is aggregating, α is typically slightly larger than one (around 1.5 in the case of GoogLeNet). Having a two-layer replacement for the 5 × 5 layer, it seems reasonable to reach this expansion in two steps: increasing the number of filters by √α in both steps. In order to simplify our estimate, we choose α = 1 (no expansion). If we naively slid a network without reusing the computation between neighboring grid tiles, we would increase the computational cost; instead, sliding this network can be represented by two 3 × 3 convolutional layers which reuse the activations between adjacent tiles. This way, we end up with a net (9 + 9)/25 reduction of computation, resulting in a relative gain of 28% by this factorization. The exact same saving holds for the parameter count, as each parameter is used exactly once in the computation of the activation of each unit. Still, this setup raises two general questions: Does this replacement result in any loss of expressiveness? If our main goal is to factorize the linear part of the computation, would it not suggest keeping linear activations in the first layer? We have run several control experiments (for example, see Figure 2) and using linear activation was always inferior to using rectified linear units in all stages of the factorization. We attribute this gain to the enhanced space of variations that the network can learn, especially if we batch-normalize [7] the output activations. One can see similar effects when using linear activations for the dimension reduction components.
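The savings quoted above follow from simple multiply-add counting, as the following back-of-the-envelope Python (with an illustrative filter count) confirms:

```python
# Multiply-adds per output position for n input and n output filters (bias ignored):
n = 192                               # illustrative filter count
cost_5x5 = 5 * 5 * n * n              # one 5x5 layer
cost_two_3x3 = 2 * (3 * 3 * n * n)    # two stacked 3x3 layers

print(cost_5x5 / (3 * 3 * n * n))     # 2.78x: the 25/9 ratio quoted above
print(1 - cost_two_3x3 / cost_5x5)    # 0.28: the 28% relative saving
```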
Figure 3. Mini-network replacing the 3 × 3 convolutions. The lower layer of this network consists of a 3 × 1 convolution with 3 output units.
Figure 4. Original Inception module as described in [20].
Still we can ask the question whether one should factorize them into smaller, for example 2 × 2, convolutions. However, it turns out that one can do even better than 2 × 2 by using asymmetric convolutions, e.g. n × 1. For example, using a 3 × 1 convolution followed by a 1 × 3 convolution is equivalent to sliding a two-layer network with the same receptive field as a 3 × 3 convolution (see Figure 3). Still, the two-layer solution is 33% cheaper for the same number of output filters, if the number of input and output filters is equal. By comparison, factorizing a 3 × 3 convolution into two 2 × 2 convolutions represents only an 11% saving of computation.
Figure 5. Inception modules where each 5 × 5 convolution is replaced by two 3 × 3 convolutions, as suggested by principle 3 of Section 2.

In theory, we could go even further and argue that one can replace any n × n convolution by a 1 × n
convolution followed by an n × 1 convolution, and the computational cost saving increases dramatically as n grows (see Figure 6). In practice, we have found that employing this factorization does not work well on early layers, but it gives very good results on medium grid-sizes (on m × m feature maps, where m ranges between 12 and 20). On that level, very good results can be achieved by using 1 × 7 convolutions followed by 7 × 1 convolutions.
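The same counting argument gives the savings for asymmetric factorization; the short sketch below (illustrative only) reproduces the 33% figure for n = 3 and shows how the saving grows with n:

```python
# Relative cost of factorizing an n x n convolution into 1 x n followed by n x 1
# (equal input/output filter counts assumed):
for n in (3, 5, 7):
    saving = 1 - (2 * n) / (n * n)   # (n + n) / n^2 multiply-adds instead of n^2
    print(n, f"{saving:.0%}")        # 3 -> 33%, 5 -> 60%, 7 -> 71% cheaper
```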
# 4. Utility of Auxiliary Classifiers
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and with using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | [
{
"id": "1502.01852"
},
{
"id": "1503.03832"
},
{
"id": "1509.09308"
}
] |
1512.00567 | 20 | # 4. Utility of Auxiliary Classifiers
[20] has introduced the notion of auxiliary classifiers to improve the convergence of very deep networks. The original motivation was to push useful gradients to the lower layers to make them immediately useful and improve the convergence during training by combating the vanishing gradient problem in very deep networks. Also Lee et al. [11] argue that auxiliary classifiers promote more stable learning and better convergence. Interestingly, we found that auxiliary classifiers did not result in improved convergence early in the training: the training progression of the network with and without the side head looks virtually identical before both models reach high accuracy. Near the end of training, the network with the auxiliary branches starts to overtake the accuracy of the network without any auxiliary branch and reaches a slightly higher plateau.
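In training code, such a side head would typically enter only through the loss. A minimal sketch of this regularizing use, assuming a PyTorch setup; the 0.3 weight is our illustrative choice, not a value stated here:

```python
import torch
import torch.nn.functional as F

def training_loss(main_logits, aux_logits, targets, aux_weight=0.3):
    # The auxiliary branch contributes a down-weighted loss term during
    # training and is discarded at inference time. The 0.3 weight is an
    # illustrative assumption, not a value given in this paper.
    main_loss = F.cross_entropy(main_logits, targets)
    aux_loss = F.cross_entropy(aux_logits, targets)
    return main_loss + aux_weight * aux_loss

# Example: logits for a batch of 8 over 1000 classes.
main_logits, aux_logits = torch.randn(8, 1000), torch.randn(8, 1000)
targets = torch.randint(0, 1000, (8,))
loss = training_loss(main_logits, aux_logits, targets)
```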
Also [20] used two side-heads at different stages in the network. The removal of the lower auxiliary branch did not have any adverse effect on the final quality of the network. Together with the earlier observation in the previous para-
Figure 6. Inception modules after the factorization of the n × n convolutions. In our proposed architecture, we chose n = 7 for the 17 × 17 grid. (The filter sizes are picked using principle 3.) | 1512.00567#20 | Rethinking the Inception Architecture for Computer Vision | Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and with using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | [
{
"id": "1502.01852"
},
{
"id": "1503.03832"
},
{
"id": "1509.09308"
}
] |
1512.00567 | 21 | graph, this means that the original hypothesis of [20], that these branches help evolve the low-level features, is most likely misplaced. Instead, we argue that the auxiliary classifiers act as regularizers. This is supported by the fact that the main classifier of the network performs better if the side branch is batch-normalized [7] or has a dropout layer. This also gives weak supporting evidence for the conjecture that batch normalization acts as a regularizer.
# 5. Efficient Grid Size Reduction | 1512.00567#21 | Rethinking the Inception Architecture for Computer Vision | Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and with using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | [
{
"id": "1502.01852"
},
{
"id": "1503.03832"
},
{
"id": "1509.09308"
}
] |
1512.00567 | 22 | # 5. Efficient Grid Size Reduction
Traditionally, convolutional networks used some pooling operation to decrease the grid size of the feature maps. In order to avoid a representational bottleneck, before applying maximum or average pooling the activation dimension of the network filters is expanded. For example, starting from a d × d grid with k filters, if we would like to arrive at a (d/2) × (d/2) grid with 2k filters, we first need to compute a stride-1 convolution with 2k filters and then apply an additional pooling step. This means that the overall computational cost is dominated by the expensive convolution on the larger grid, using 2d^2 k^2 operations. One possibility would be to swap the pooling and the convolution, resulting in 2(d/2)^2 k^2 operations,
Figure 7. Inception modules with expanded filter bank outputs. This architecture is used on the coarsest (8 × 8) grids to promote high-dimensional representations, as suggested by principle 2 of Section 2. We are using this solution only on the coarsest grid, since that is the place where producing high-dimensional sparse representations is the most critical, as the ratio of local processing (by 1 × 1 convolutions) is increased compared to the spatial aggregation. | 1512.00567#22 | Rethinking the Inception Architecture for Computer Vision | Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and with using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | [
{
"id": "1502.01852"
},
{
"id": "1503.03832"
},
{
"id": "1509.09308"
}
] |
1512.00567 | 23 | [Figure 8 diagram: from a 17 × 17 × 768 activation, 5 × 5 average pooling with stride 3 gives 5 × 5 × 768, a 1 × 1 convolution gives 5 × 5 × 128, and a fully connected layer gives 1 × 1 × 1024.]
Figure 8. Auxiliary classifier on top of the last 17 × 17 layer. Batch normalization [7] of the layers in the side head results in a 0.4% absolute gain in top-1 accuracy. The lower axis shows the number of iterations performed, each with batch size 32.
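Combining the caption with the diagram placeholder above, the side head could be sketched roughly as follows. This is our reconstruction; layer details beyond the stated dimensions (activations, the 1000-way output) are assumptions:

```python
import torch.nn as nn

# Rough sketch of the figure-8 auxiliary head on a 17x17x768 input:
# 5x5 average pooling (stride 3) -> 1x1 conv -> fully connected -> logits,
# with batch normalization in the side head as the caption suggests.
aux_head = nn.Sequential(
    nn.AvgPool2d(kernel_size=5, stride=3),   # 17x17x768 -> 5x5x768
    nn.Conv2d(768, 128, kernel_size=1),      # 5x5x768  -> 5x5x128
    nn.BatchNorm2d(128),                     # BN in the side head
    nn.ReLU(inplace=True),
    nn.Flatten(),
    nn.Linear(128 * 5 * 5, 1024),            # fully connected -> 1x1x1024
    nn.BatchNorm1d(1024),
    nn.ReLU(inplace=True),
    nn.Linear(1024, 1000),                   # class logits (assumed 1000-way)
)
```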
reducing the computational cost by a quarter. However, this creates a representational bottleneck as the overall dimensionality of the representation drops to (d/2)^2 k, resulting in less expressive networks (see Figure 9). Instead of doing so, we suggest another variant that reduces the computational cost even further while removing the representational bottleneck (see Figure 10). We can use two parallel stride-2 blocks: P and C. P is a pooling layer (either average or maximum pooling) over the activation; both blocks have stride 2, and their filter banks are concatenated as in figure 10.
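A minimal sketch of such a reduction block, assuming PyTorch and the 35 × 35 × 320 to 17 × 17 × 640 shapes of figures 9 and 10; branch details beyond stride 2 and concatenation are our assumptions:

```python
import torch
import torch.nn as nn

class ReductionBlock(nn.Module):
    """Parallel stride-2 convolution (C) and pooling (P) branches whose
    outputs are concatenated: the grid is halved while the filter bank
    grows, without the expensive full-resolution 2*d^2*k^2 convolution."""
    def __init__(self, in_ch=320, conv_ch=320):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, kernel_size=3, stride=2)  # C branch
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2)               # P branch

    def forward(self, x):
        return torch.cat([self.conv(x), self.pool(x)], dim=1)

x = torch.randn(1, 320, 35, 35)
y = ReductionBlock()(x)
print(y.shape)  # torch.Size([1, 640, 17, 17]): half the grid, twice the filters
```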
[Figure 9 diagram: two grid reductions from 35 × 35 × 320 to 17 × 17 × 640, pooling then Inception versus Inception then pooling.] | 1512.00567#23 | Rethinking the Inception Architecture for Computer Vision | Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and with using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | [
{
"id": "1502.01852"
},
{
"id": "1503.03832"
},
{
"id": "1509.09308"
}
] |
1512.00567 | 24 | [Figure 9 diagram: two grid reductions from 35 × 35 × 320 to 17 × 17 × 640, pooling then Inception versus Inception then pooling.]
Figure 9. Two alternative ways of reducing the grid size. The solution on the left violates principle 1 of Section 2 by introducing a representational bottleneck. The version on the right is 3 times more expensive computationally.
[Figure 10 diagram: parallel 3 × 3 stride-2 convolution and stride-2 pooling branches over a 35 × 35 × 320 base, concatenated into a 17 × 17 × 640 filter bank.]
Figure 10. Inception module that reduces the grid size while expanding the filter banks. It is both cheap and avoids the representational bottleneck, as suggested by principle 1. The diagram on the right represents the same solution, but from the perspective of grid sizes rather than the operations.
# 6. Inception-v2 | 1512.00567#24 | Rethinking the Inception Architecture for Computer Vision | Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and with using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | [
{
"id": "1502.01852"
},
{
"id": "1503.03832"
},
{
"id": "1509.09308"
}
] |
1512.00567 | 25 | # 6. Inception-v2
Here we are connecting the dots from above and propose a new architecture with improved performance on the ILSVRC 2012 classification benchmark. The layout of our network is given in table 1. Note that we have factorized the traditional 7 × 7 convolution into three 3 × 3 convolutions based on the same ideas as described in section 3.1 (see the stem sketch after this chunk). For the Inception part of the network, we have 3 traditional inception modules at the 35 × 35 level with 288 filters each. This is reduced to a 17 × 17 grid with 768 filters using the grid reduction technique described in section 5. This is followed by 5 instances of the factorized inception modules as depicted in figure 5. This is reduced to an 8 × 8 × 1280 grid with the grid reduction technique depicted in figure 10. At the coarsest 8 × 8 level, we have two Inception modules as depicted in figure 6, with a concatenated output filter bank size of 2048 for each tile. The detailed structure of the network, including the sizes of the filter banks inside the Inception modules, is given in the supplementary material, in the model.txt that is in the tar-file of this submission. | 1512.00567#25 | Rethinking the Inception Architecture for Computer Vision | Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and with using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | [
{
"id": "1502.01852"
},
{
"id": "1503.03832"
},
{
"id": "1509.09308"
}
] |
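As referenced in the preceding chunk, a sketch of the factorized stem: the traditional 7 × 7 convolution replaced by three 3 × 3 convolutions. The filter counts (32, 32, 64) mirror common Inception-v3 implementations and are our assumption, not taken from this excerpt; activations and batch normalization are omitted for brevity.

```python
import torch
import torch.nn as nn

# Stem sketch: the traditional 7x7/2 convolution factorized into
# three 3x3 convolutions (filter counts are assumptions).
stem = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2),              # 299 -> 149
    nn.Conv2d(32, 32, kernel_size=3, stride=1),             # 149 -> 147
    nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),  # 147 -> 147
)

x = torch.randn(1, 3, 299, 299)
print(stem(x).shape)  # torch.Size([1, 64, 147, 147])
```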