doi (stringlengths 10-10) | chunk-id (int64 0-936) | chunk (stringlengths 401-2.02k) | id (stringlengths 12-14) | title (stringlengths 8-162) | summary (stringlengths 228-1.92k) | source (stringlengths 31-31) | authors (stringlengths 7-6.97k) | categories (stringlengths 5-107) | comment (stringlengths 4-398, nullable) | journal_ref (stringlengths 8-194, nullable) | primary_category (stringlengths 5-17) | published (stringlengths 8-8) | updated (stringlengths 8-8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1602.01783 | 61 | [Figure: TORCS score vs. training time (hours) curves for four settings (slow car/fast car, with/without bots), comparing Async 1-step Q, Async SARSA, Async n-step Q, Async actor-critic, and a human tester; the caption appears in the next chunk.] | 1602.01783#61 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616 | [
{
"id": "1509.02971"
},
{
"id": "1509.06461"
},
{
"id": "1511.05952"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1602.01783 | 62 | Figure S6. Comparison of algorithms on the TORCS car racing simulator. Four different configurations of car speed and opponent presence or absence are shown. In each plot, all four algorithms (one-step Q, one-step Sarsa, n-step Q and Advantage Actor-Critic) are compared on score vs training time in wall clock hours. Multi-step algorithms achieve better policies much faster than one-step algorithms on all four levels. The curves show averages over the 5 best runs from 50 experiments with learning rates sampled from LogUniform(10^-4, 10^-2) and all other hyperparameters fixed.
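The learning-rate search described in this caption is straightforward to reproduce. A minimal NumPy sketch of log-uniform sampling between the quoted bounds follows; the bounds and the count of 50 come from the caption, while the seed and function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def log_uniform(low: float, high: float, size: int, rng=None) -> np.ndarray:
    """Sample `size` values uniformly in log-space from the interval [low, high)."""
    rng = rng or np.random.default_rng(0)  # arbitrary seed for reproducibility
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

# 50 candidate learning rates in [1e-4, 1e-2], matching the sweep in the caption above.
learning_rates = log_uniform(1e-4, 1e-2, size=50)
```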
Asynchronous Methods for Deep Reinforcement Learning | 1602.01783#62 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616 | [
{
"id": "1509.02971"
},
{
"id": "1509.06461"
},
{
"id": "1511.05952"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1602.01783 | 63 | Figure S7. Performance for the Mujoco continuous action domains. Scatter plot of the best score obtained against learning rates sampled from LogUniform(10^-5, 10^-1). For nearly all of the tasks there is a wide range of learning rates that lead to good performance on the task.
Asynchronous Methods for Deep Reinforcement Learning
Figure S8. Score per episode vs wall-clock time plots for the Mujoco domains. Each plot shows error bars for the top 5 experiments.
Figure S9. Data efficiency comparison of different numbers of actor-learners for one-step Sarsa on five Atari games. The x-axis shows the total number of training epochs where an epoch corresponds to four million frames (across all threads). The y-axis shows the average score. Each curve shows the average of the three best performing agents from a search over 50 random learning rates. Sarsa shows increased data efficiency with increased numbers of parallel workers.
Asynchronous Methods for Deep Reinforcement Learning | 1602.01783#63 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616 | [
{
"id": "1509.02971"
},
{
"id": "1509.06461"
},
{
"id": "1511.05952"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1602.01783 | 64 | Figure S10. Training speed comparison of different numbers of actor-learners for one-step Sarsa on five Atari games. The x-axis shows training time in hours while the y-axis shows the average score. Each curve shows the average of the three best performing agents from a search over 50 random learning rates. Sarsa shows significant speedups from using greater numbers of parallel actor-learners.
Figure S11. Scatter plots of scores obtained by one-step Q, one-step Sarsa, and n-step Q on five games (Beamrider, Breakout, Pong, Q*bert, Space Invaders) for 50 different learning rates and random initializations. All algorithms exhibit some level of robustness to the choice of learning rate.
Asynchronous Methods for Deep Reinforcement Learning | 1602.01783#64 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616 | [
{
"id": "1509.02971"
},
{
"id": "1509.06461"
},
{
"id": "1511.05952"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1602.01783 | 65 | [Extracted results-table columns: raw per-game scores for the DQN and Gorila baselines across the Atari games; the corresponding game names are listed in chunk 1602.01783#67.] | 1602.01783#65 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616 | [
{
"id": "1509.02971"
},
{
"id": "1509.06461"
},
{
"id": "1511.05952"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1602.01783 | 67 | Game: Alien, Amidar, Assault, Asterix, Asteroids, Atlantis, Bank Heist, Battle Zone, Beam Rider, Berzerk, Bowling, Boxing, Breakout, Centipede, Chopper Command, Crazy Climber, Defender, Demon Attack, Double Dunk, Enduro, Fishing Derby, Freeway, Frostbite, Gopher, Gravitar, H.E.R.O., Ice Hockey, James Bond, Kangaroo, Krull, Kung-Fu Master, Montezuma's Revenge, Ms. Pacman, Name This Game, Phoenix, Pit Fall, Pong, Private Eye, Q*Bert, River Raid, Road Runner, Robotank, Seaquest, Skiing, Solaris, Space Invaders, Star Gunner, Surround, Tennis, Time Pilot, Tutankham, Up and Down, Venture, Video Pinball, Wizard of Wor, Yars Revenge, Zaxxon | 1602.01783#67 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616 | [
{
"id": "1509.02971"
},
{
"id": "1509.06461"
},
{
"id": "1511.05952"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1602.01783 | 70 | [Extracted results-table column: raw per-game scores for the Prioritized baseline across the Atari games; the corresponding game names are listed in chunk 1602.01783#67.] | 1602.01783#70 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616 | [
{
"id": "1509.02971"
},
{
"id": "1509.06461"
},
{
"id": "1511.05952"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1602.01783 | 73 | [Extracted results-table column: raw per-game scores for A3C LSTM across the Atari games; the corresponding game names are listed in chunk 1602.01783#67.] | 1602.01783#73 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616 | [
{
"id": "1509.02971"
},
{
"id": "1509.06461"
},
{
"id": "1511.05952"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1602.01137 | 0 | arXiv:1602.01137v1 [cs.IR] 2 Feb 2016
# A Dual Embedding Space Model for Document Ranking
Bhaskar Mitra Microsoft Cambridge, UK [email protected]
Eric Nalisnick University of California Irvine, USA [email protected]
Nick Craswell, Rich Caruana Microsoft Redmond, USA {nickcr, rcaruana}@microsoft.com
ABSTRACT A fundamental goal of search engines is to identify, given a query, documents that have relevant text. This is intrinsically difficult because the query and the document may use different vocabulary, or the document may contain query words without being relevant. We investigate neural word embeddings as a source of evidence in document ranking. We train a word2vec embedding model on a large unlabelled query corpus, but in contrast to how the model is commonly used, we retain both the input and the output projections, allowing us to leverage both the embedding spaces to derive richer distributional relationships. During ranking we map the query words into the input space and the document words into the output space, and compute a query-document relevance score by aggregating the cosine similarities across all the query-document word pairs. | 1602.01137#0 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 1 | We postulate that the proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches. Our experiments show that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF. However, when ranking a larger set of candidate documents, we find the embeddings-based approach is prone to false positives, retrieving documents that are only loosely related to the query. We demonstrate that this problem can be solved effectively by ranking based on a linear mixture of the DESM and the word counting features. Categories and Subject Descriptors H.3 [Information Storage and Retrieval]: H.3.3 Information Search and Retrieval Keywords: Document ranking; Word embeddings; Word2vec | 1602.01137#1 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 2 | Figure 1: A two dimensional PCA projection of the 200-dimensional embeddings. Relevant documents are yellow, irrelevant documents are grey, and the query is blue. To visualize the results of multiple queries at once, before dimensionality reduction we centre query vectors at the origin and represent documents as the difference between the document vector and its query vector. (a) uses IN word vector centroids to represent both the query and the documents. (b) uses IN for the queries and OUT for the documents, and seems to have a higher density of relevant documents near the query.
# INTRODUCTION
Identifying relevant documents for a given query is a core challenge for Web search. For large-scale search engines, it is possible to identify a very small set of pages that can answer a good proportion of queries [2]. For such popular pages, clicks and hyperlinks may provide sufficient ranking evidence and it may not be important to match the query against the body text. However, in many Web search scenarios such query-content matching is crucial. If new content is available, the new and updated documents may not have click evidence or may have evidence that is out of date. For new or tail queries, there may be no memorized connections between the queries and the documents. Furthermore, many search engines and apps have a relatively smaller number of users, which limits their | 1602.01137#2 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 3 | This paper is an extended evaluation and analysis of the model proposed by Nalisnick et al. [32] to appear in WWW'16, April 11 - 15, 2016, Montreal, Canada. Copyright 2016 by the author(s).
ability to answer queries based on memorized clicks. There may even be insufficient behaviour data to learn a click-based embedding [18] or a translation model [10, 19]. In these cases it is crucial to model the relationship between the query and the document content, without click data.
When considering the relevance of document body text to a query, the traditional approach is to count repetitions of query terms in the document. Different transformation and weighting schemes for those counts lead to a variety of possible TF-IDF ranking features. One theoretical basis for such features is the probabilistic model of information retrieval, which has yielded the very successful TF-IDF formulation BM25 [35]. As noted by Robertson [34], the probabilistic approach can be restricted to consider only the original query terms or it can automatically identify additional terms that are correlated with relevance. However, the basic commonly-used form | 1602.01137#3 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 4 | Table 1: The nearest neighbours for the words "yale", "seahawks" and "eminem" according to the cosine similarity based on the IN-IN, OUT-OUT and IN-OUT vector comparisons for the different words in the vocabulary. These examples show that IN-IN and OUT-OUT cosine similarities are high for words that are similar by function or type (typical), and the IN-OUT cosine similarities are high between words that often co-occur in the same query or document (topical). The word2vec model used here was trained on a query corpus with a vocabulary of 2,748,230 words.
IN-IN yale: harvard, nyu, cornell, tulane, tufts; OUT-OUT yale: uconn, harvard, tulane, nyu, tufts; IN-OUT yale: faculty, alumni, orientation, haven, graduate; IN-IN seahawks: 49ers, broncos, packers, nfl, steelers; OUT-OUT seahawks: broncos, 49ers, nfl, packers, steelers; IN-OUT seahawks: highlights, jerseys, tshirts, seattle, hats; IN-IN eminem: rihanna, ludacris, kanye, beyonce, 2pac; OUT-OUT eminem: rihanna, dre, kanye, beyonce, tupac; IN-OUT eminem: rap, featuring, tracklist, diss, performs | 1602.01137#4 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 5 | of BM25 considers query terms only, under the assumption that non-query terms are less useful for document ranking.
In the probabilistic approach, the 2-Poisson model forms the basis for counting term frequency [6, 15, 36]. The stated goal is to distinguish between a document that is about a term and a document that merely mentions that term. These two types of documents have term frequencies from two different Poisson distributions, such that documents about the term tend to have higher term frequency than those that merely mention it. This explanation for the relationship between term frequency and aboutness is the basis for the TF function in BM25 [36]. | 1602.01137#5 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 6 | The new approach in this paper uses word occurrences as evidence of aboutness, as in the probabilistic approach. However, instead of considering term repetition as evidence of aboutness it considers the relationship between the query terms and all the terms in the document. For example, given a query term "yale", in addition to considering the number of times Yale is mentioned in the document, we look at whether related terms occur in the document, such as "faculty" and "alumni". Similarly, in a document about the Seahawks sports team one may expect to see the terms "highlights" and "jerseys". The occurrence of these related terms in sufficient numbers is a way to distinguish between documents that merely mention Yale or Seahawks and the documents that are about the university or about the sports team.
⢠We propose a document ranking feature based on comparing all the query words with all the document words, which is equivalent to comparing each query word to a centroid of the document word embeddings.
⢠We analyse the positive aspects of the new feature, prefer- ring documents that contain many words related to the query words, but also note the potential of the feature to have false positive matches. | 1602.01137#6 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 7 | • We analyse the positive aspects of the new feature, preferring documents that contain many words related to the query words, but also note the potential of the feature to have false positive matches.
⢠We empirically compare the new approach to a single em- bedding and the traditional word counting features. The new approach works well on its own in a telescoping setting, re- ranking the top documents returned by a commercial Web search engine, and in combination with word counting for a more general document retrieval task.
2. DISTRIBUTIONAL SEMANTICS FOR IR In this section we first introduce the Continuous Bag-of-Words (CBOW) model made popular by the software Word2Vec [28, 29]. Then, inspired by our findings that distinctly different topic-based relationships can be found by using both the input and the output embeddings jointly (the latter of which is usually discarded after training), we propose the Dual Embedding Space Model (DESM) for document ranking. | 1602.01137#7 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 8 | With this motivation, in Section 2 we describe how the input and the output embedding spaces learned by a word2vec model may be jointly particularly attractive for modelling the aboutness aspect of document ranking. Table 1 gives some anecdotal evidence of why this is true. If we look in the neighbourhood of the IN vector of the word "yale" then the other IN vectors that are close correspond to words that are functionally similar or of the same type, e.g., "harvard" and "nyu". A similar pattern emerges if we look at the OUT vectors in the neighbourhood of the OUT vector of "yale". On the other hand, if we look at the OUT vectors that are closest to the IN vector of "yale" we find words like "faculty" and "alumni". We use this property of the IN-OUT embeddings to propose a novel Dual Embedding Space Model (DESM) for document ranking. Figure 1 further illustrates how in this Dual Embedding Space model, using the IN embeddings for the query words and the OUT embeddings for the document words we get a much more useful similarity definition between the query and the relevant document centroids.
The main contributions of this paper are,
# 2.1 Continuous Bag-of-Words | 1602.01137#8 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 9 | The main contributions of this paper are,
# 2.1 Continuous Bag-of-Words
While many word embedding models have been proposed recently, the Continuous Bag-of-Words (CBOW) and the Skip-Gram (SG) architectures proposed by Mikolov et al. [29] are arguably the most popular (perhaps due to the popularity of the software Word2Vec1, which implements both). Although here we will concentrate exclusively on the CBOW model, our proposed IR ranking methodology is just as applicable to vectors produced by SG, as both models produce qualitatively and quantitatively similar embeddings. The CBOW model learns a word's embedding via maximizing the log conditional probability of the word given the context words occurring within a fixed-sized window around that word. That is, the words in the context window serve as input, and from them, the model attempts to predict the center (missing) word. For a formal definition, let $c_k \in \mathbb{R}^d$ be a d-dimensional, real-valued vector representing the kth context word $c_k$ appearing in a $(K-1)$-sized window around an instance of word $w_i$, which is represented by a vector $w_i \in \mathbb{R}^d$. The model "predicts" word $w_i$ by adapting its representation vector such that it has a large inner-product with | 1602.01137#9 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 10 | • A novel Dual Embedding Space Model, with one embedding for query words and a separate embedding for document words, learned jointly based on an unlabelled text corpus.
1https://code.google.com/p/word2vec/
Figure 2: The architecture of a word2vec (CBOW) model considering a single context word. $W_{IN}$ and $W_{OUT}$ are the two weight matrices learnt during training and correspond to the IN and the OUT word embedding spaces of the model.
the mean of the context word vectors. Training CBOW requires minimization of the following objective
$\mathcal{L}_{CBOW} = -\sum_{i=1}^{|D|} \log p(w_i \mid \bar{C}_K) = -\sum_{i=1}^{|D|} \log \frac{e^{\bar{C}_K^{\top} w_i}}{\sum_{v=1}^{|V|} e^{\bar{C}_K^{\top} w_v}} \qquad (1)$
where
$\bar{C}_K = \frac{1}{K-1} \sum_{i-K \le k \le i+K,\; k \ne i} c_k \qquad (2)$
and D represents the training corpus. Notice that the probability is normalized by summing over all the vocabulary, which is quite costly when training on web-scale data. To make CBOW scalable, Mikolov et al. [29] proposed the following slightly altered negative sampling objective: | 1602.01137#10 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 11 | $-\log p(w_i \mid \bar{C}_K) \approx -\log \sigma(\bar{C}_K^{\top} w_i) - \sum_{n=1}^{N} \log \sigma(-\bar{C}_K^{\top} w_n) \qquad (3)$
where $\sigma$ is the sigmoid function and N is the number of negative sample words drawn either from the uniform or empirical distribution over the vocabulary. All our experiments were performed with the negative sampling objective.
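For concreteness, here is a minimal NumPy sketch of this per-example negative-sampling loss, combining the mean context vector of Eq. (2) with the objective of Eq. (3); the matrix names W_in/W_out, the toy dimensions, and the way negatives are supplied are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_ns_loss(W_in, W_out, context_ids, target_id, negative_ids):
    """Negative-sampling loss (Eq. 3) for one (context window, centre word) pair.

    W_in, W_out  : |V| x d IN and OUT embedding matrices.
    context_ids  : indices of the K-1 context words.
    target_id    : index of the centre word w_i.
    negative_ids : indices of the N sampled negative words.
    """
    c_bar = W_in[context_ids].mean(axis=0)                       # Eq. (2): mean context vector
    pos = np.log(sigmoid(c_bar @ W_out[target_id]))              # observed (positive) word
    neg = np.log(sigmoid(-(W_out[negative_ids] @ c_bar))).sum()  # N negative samples
    return -(pos + neg)

# Toy usage with a random 10-word vocabulary of 4-dimensional embeddings.
rng = np.random.default_rng(0)
W_in, W_out = rng.normal(size=(10, 4)), rng.normal(size=(10, 4))
print(cbow_ns_loss(W_in, W_out, context_ids=[1, 2, 4], target_id=3, negative_ids=[7, 8]))
```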
A crucial detail often overlooked when using Word2Vec is that there are two different sets of vectors (represented above by c and w respectively and henceforth referred to as the IN and OUT embedding spaces), which correspond to the $W_{IN}$ and $W_{OUT}$ weight matrices in Figure 2. By default, Word2Vec discards $W_{OUT}$ at the end of training and outputs only $W_{IN}$. Subsequent tasks determine word-to-word semantic relatedness by computing the cosine similarity:
$\mathrm{sim}(c_i, c_j) = \cos(c_i, c_j) = \frac{c_i^{\top} c_j}{\|c_i\|\,\|c_j\|} \qquad (4)$
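A small sketch of how the IN-IN versus IN-OUT comparisons behind Table 1 can be computed once both matrices are retained; W_in, W_out, vocab, and word2id are assumed to come from an already-trained model and are hypothetical names, not part of the paper's code.

```python
import numpy as np

def nearest_neighbours(query_vec, matrix, vocab, k=5):
    """Top-k words in `vocab` by cosine similarity (Eq. 4) between query_vec and rows of matrix."""
    sims = (matrix @ query_vec) / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    return [vocab[i] for i in np.argsort(-sims)[:k]]

# Hypothetical usage, assuming W_in / W_out (|V| x d) were both kept after training
# and word2id maps each word in `vocab` to its row index:
#   nearest_neighbours(W_in[word2id["yale"]], W_in,  vocab)  # IN-IN: typical neighbours
#   nearest_neighbours(W_in[word2id["yale"]], W_out, vocab)  # IN-OUT: topical neighbours
```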
# 2.2 Dual Embedding Space Model | 1602.01137#11 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 12 | $\mathrm{sim}(c_i, c_j) = \cos(c_i, c_j) = \frac{c_i^{\top} c_j}{\|c_i\|\,\|c_j\|} \qquad (4)$
# 2.2 Dual Embedding Space Model
A key challenge for term-matching based retrieval is to distinguish whether a document merely references a term or is about that entity. See Figure 3 for a concrete example of two passages that contain the term "Albuquerque" an equal number of times although only one of the passages is about that entity. The presence of words like "population" and "metropolitan" indicates that the left passage is about Albuquerque, whereas the passage on the right just mentions it. However, these passages would be indistinguishable under term counting. The semantic similarity of non-matched terms (i.e. the words a TF feature would overlook) is crucial for inferring a document's topic of focus, its aboutness. | 1602.01137#12 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 13 | Due to its ability to capture word co-occurrence (i.e. perform missing word prediction), CBOW is a natural fit for modelling the aboutness of a document. The learnt embedding spaces contain useful knowledge about the distributional properties of words, allowing, in the case of Figure 3, an IR system to recognize the city-related terms in the left document. With this motivation, we define a simple yet, as we will demonstrate, effective ranking function we call the Dual Embedding Space Model:
$\mathrm{DESM}(Q, D) = \frac{1}{|Q|} \sum_{q_i \in Q} \frac{q_i^{\top} \bar{D}}{\|q_i\|\,\|\bar{D}\|} \qquad (5)$
where
$\bar{D} = \frac{1}{|D|} \sum_{d_j \in D} \frac{d_j}{\|d_j\|} \qquad (6)$
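A minimal NumPy sketch of Eq. (5)-(6) follows; W_query and W_doc stand for whichever embedding spaces are chosen for the query and document sides (IN and OUT respectively for the IN-OUT variant discussed later), and all names are illustrative rather than the authors' code.

```python
import numpy as np

def desm_score(query_ids, doc_ids, W_query, W_doc):
    """DESM(Q, D): mean cosine between each query word vector and the centroid
    of the unit-normalised document word vectors (Eq. 5 and Eq. 6)."""
    d = W_doc[doc_ids]
    d_bar = (d / np.linalg.norm(d, axis=1, keepdims=True)).mean(axis=0)        # Eq. (6)
    q = W_query[query_ids]
    cosines = (q @ d_bar) / (np.linalg.norm(q, axis=1) * np.linalg.norm(d_bar) + 1e-9)
    return cosines.mean()                                                       # Eq. (5)

# The document centroid d_bar can be pre-computed once per document and reused
# across queries, which is the runtime property noted in the text below.
```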
Here $\bar{D}$ is the centroid of all the normalized vectors for the words in the document serving as a single embedding for the whole document. In this formulation of the DESM, the document embeddings can be pre-computed, and at the time of ranking, we only need to sum the score contributions across the query terms. We expect that the ability to pre-compute a single document embedding is a very useful property when considering runtime efficiency. | 1602.01137#13 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
IN-IN vs. IN-OUT. Hill et al. [16] noted, "Not all neural embeddings are born equal". As previously mentioned, the CBOW (and SG) model contains two separate embedding spaces (IN and OUT) whose interactions capture additional distributional semantics of words that are not observable by considering either of the two embedding spaces in isolation. Table 1 illustrates clearly how the CBOW model "pushes" the IN vectors of words closer to the OUT vectors of other words that they commonly co-occur with. In doing so, words that appear in similar contexts get pushed closer to each other within the IN embedding space (and also within the OUT embedding space). Therefore the IN-IN (or the OUT-OUT) cosine similarities are higher for words that are typically (by type or by function) similar, whereas the IN-OUT cosine similarities are higher for words that co-occur often in the training corpus (topically similar). This gives us at least two variants of the DESM, corresponding to retrieval in the IN-OUT space or the IN-IN space.²
DESM_{IN-OUT}(Q, D) = \frac{1}{|Q|} \sum_{q_i \in Q} \frac{q_{IN,i}^{T} \bar{D}_{OUT}}{\|q_{IN,i}\| \, \|\bar{D}_{OUT}\|}

where D̄_OUT is the centroid of the normalized OUT vectors of the document's words, defined analogously to D̄ above.
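The two variants then differ only in which space supplies the query vectors and which supplies the document centroid. A sketch reusing the helpers from the previous snippet, with in_embeddings and out_embeddings assumed to hold the IN and OUT projections of the same trained model:

```python
def desm_in_out(query_terms, doc_terms, in_embeddings, out_embeddings):
    # Query words are looked up in the IN space; the document centroid is
    # built from the OUT space (topical, "aboutness"-oriented similarity).
    return desm_score(query_terms,
                      document_centroid(doc_terms, out_embeddings),
                      in_embeddings)

def desm_in_in(query_terms, doc_terms, in_embeddings):
    # Both sides use the IN space (typical, by-type similarity).
    return desm_score(query_terms,
                      document_centroid(doc_terms, in_embeddings),
                      in_embeddings)
```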
² It is also possible to define DESM(OUT-OUT) and DESM(OUT-IN), but based on limited experimentation we expect them to behave similarly to DESM(IN-IN) and DESM(IN-OUT), respectively.
Passage (a): Albuquerque is the most populous city in the U.S. state of New Mexico. The high-altitude city serves as the county seat of Bernalillo County, and it is situated in the central part of the state, straddling the Rio Grande. The city population is 557,169 as of the July 1, 2014, population estimate from the United States Census Bureau, and ranks as the 32nd-largest city in the U.S. The Metropolitan Statistical Area (or MSA) has a population of 902,797 according to the United States Census Bureau's most recently available estimate for July 1, 2013.
Passage (b): Allen suggested that they could program a BASIC interpreter for the device; after a call from Gates claiming to have a working interpreter, MITS requested a demonstration. Since they didn't actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. Although they developed the interpreter on a simulator and not the actual device, the interpreter worked flawlessly when they demonstrated the interpreter to MITS in Albuquerque, New Mexico in March 1975; MITS agreed to distribute it, marketing it as Altair BASIC.
Figure 3: Two different passages from Wikipedia that mention "Albuquerque" (highlighted in orange) exactly once. Highlighted in green are all the words that have an IN-OUT similarity score with the word "Albuquerque" above a fixed threshold (we choose -0.03 for this visualization) and can be considered as providing supporting evidence that (a) is about Albuquerque, whereas (b) happens to only mention the city.
In Section 4, we show that DESM(IN-OUT) is a better indication of aboutness than BM25, because of its knowledge of word distributional properties, and than DESM(IN-IN), since topical similarity is a better indicator of aboutness than typical similarity.
Modelling document aboutness. We perform a simple word perturbation analysis to illustrate how the DESM can collect evidence on document aboutness from both matched and non-matched terms in the document. In Table 2, we consider five small passages of text. The first three passages are about Cambridge, Oxford and giraffes respectively. The next two passages are generated by replacing the word "giraffe" by the word "Cambridge" in the passage about giraffes, and vice versa.
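As an illustration of the perturbation itself, a small sketch follows, reusing desm_in_out and assuming in_embeddings and out_embeddings from the earlier snippets are in scope. The passage string is abbreviated, and the whitespace tokenization is a simplification of whatever preprocessing the authors used.

```python
def perturb(text, old_word, new_word):
    # Replace every whitespace-delimited token equal to old_word (ignoring
    # case) with new_word; punctuation-attached occurrences are left alone.
    return " ".join(new_word if tok.lower() == old_word else tok
                    for tok in text.split())

giraffe_passage = "The giraffe is an African even-toed ungulate mammal ..."  # abbreviated
perturbed_passage = perturb(giraffe_passage, "giraffe", "cambridge")

for label, passage in [("original", giraffe_passage),
                       ("giraffe -> cambridge", perturbed_passage)]:
    score = desm_in_out(["cambridge"], passage.lower().split(),
                        in_embeddings, out_embeddings)
    print(label, round(score, 3))
```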
We compute the DESM(IN-OUT) and the DESM(IN-IN) scores along with the term frequencies for each of these passages for the query term "cambridge". As expected, all three models score the passage about Cambridge highly. However, unlike the term frequency feature, the DESMs seem robust towards keyword stuffing³, at least in this specific example: when we replace the word "giraffe" with "cambridge" in the passage about giraffes, the DESMs still score the passage relatively low. This is exactly the kind of evidence that we expect the DESM to capture that may not be possible by simple term counting.
Dot product vs. cosine similarity. In the DESM formulation (Equation 5) we compute the cosine similarity between every query word and the normalized document centroid. The use of cosine similarity (as opposed to, say, the dot product) is motivated by several factors. Firstly, much of the existing literature [28, 29] on CBOW and SG uses cosine similarity and normalized unit vectors (for performing vector algebra for word analogies). As the cosine similarity has been shown to perform well in practice in these embedding spaces, we adopt the same strategy here.
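The difference between the two choices shows up directly in the per-query-term contribution; a short sketch (function names ours):

```python
import numpy as np

def dot_contribution(q_vec, doc_centroid):
    # Unnormalized score: a query vector's length directly scales its
    # contribution.
    return float(q_vec.dot(doc_centroid))

def cosine_contribution(q_vec, doc_centroid):
    # Normalized score, as used in the DESM: only the angle between the
    # vectors matters, not their lengths.
    return float(q_vec.dot(doc_centroid)
                 / (np.linalg.norm(q_vec) * np.linalg.norm(doc_centroid)))
```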
A secondary justification can be drawn based on the observations made by Wilson and Schakel [48] that the length of the non-normalized word vectors has a direct relation to the frequency of the word. In information retrieval (IR), it is well known that frequently occurring words are ineffective features for distinguishing relevant documents from irrelevant ones. Inverse document frequency weighting is often used in IR to capture this effect. By normalizing the word vectors in the document before computing the document centroids, we are counteracting the extra influence frequent words would have on the sum.
On the other hand, both the DESMs score the passage about Oxford very highly. This is expected because both these passages contain many words that are likely to co-occur with the word "cambridge" in the training corpus. This implies that the DESM features are very susceptible to false positive matches and can only be used either in conjunction with other document ranking features, such as TF-IDF, or for re-ranking a smaller set of candidate documents already deemed at least somewhat relevant. This is similar to the telescoping evaluation setup described by Matveeva et al. [27], where multiple nested rankers are used to achieve better retrieval performance over a single ranker. At each stage of telescoping, a ranker is used to reduce the set of candidate documents that is passed on to the next. Improved performance is possible because the ranker that sees only top-scoring documents can specialize in handling such documents, for example by using different feature weights. In our experiments, we will see that the DESM is a poor standalone ranking signal on a larger set of documents, but that it performs significantly better against the BM25 and the LSA baselines once we reach a small high-quality candidate document set. This evaluation strategy of focusing on ranking for top positions is in fact quite common and has been used by many recent studies (e.g., [10, 18]).
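A minimal sketch of such a telescoped (two-stage) setup, with a generic first-stage term-matching score standing in for BM25 or the production ranker; the candidate-set size k and all names are our own illustrative choices, not values from the paper:

```python
def telescoped_rank(query_terms, doc_terms_by_id, stage1_scores,
                    in_embeddings, out_embeddings, k=100):
    """Stage 1: keep the top-k documents by the term-matching score.
    Stage 2: re-rank only that small candidate set with DESM_IN-OUT."""
    candidates = sorted(doc_terms_by_id, key=lambda d: stage1_scores[d],
                        reverse=True)[:k]
    return sorted(candidates,
                  key=lambda d: desm_in_out(query_terms, doc_terms_by_id[d],
                                            in_embeddings, out_embeddings),
                  reverse=True)
```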
Training corpus. Our CBOW model is trained on a query corpus⁴ consisting of 618,644,170 queries and a vocabulary size of 2,748,230 words. The queries are sampled from Bing's large-scale search logs from the period of August 19, 2014 to August 25, 2014. We repeat all our experiments using another CBOW model trained on a corpus of document body text with 341,787,174 distinct sentences sampled from the Bing search index and a corresponding vocabulary size of 5,108,278 words. Empirical results on the performance of both the models are presented in Section 4.
Out-of-vocabulary (OOV) words. One of the challenges of the embedding models is that they can only be applied to a fixed-size vocabulary. It is possible to explore different strategies to deal with out-of-vocabulary (OOV) words in Equation 5 (see footnote 5). We leave this for future investigation; in this paper, all the OOV words are ignored for computing the DESM score, but not for computing the TF-IDF feature, a potential advantage for the latter.
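A hedged sketch of how such a model might be trained while keeping both projections, assuming gensim's Word2Vec (CBOW with negative sampling). The hyperparameter values are placeholders rather than the paper's settings, and the syn1neg attribute holding the output weights is a detail of that library (treat it as an assumption), not something the paper specifies.

```python
from gensim.models import Word2Vec

# `queries` is assumed to be an iterable of tokenized queries (lists of strings).
model = Word2Vec(sentences=queries, vector_size=200, window=5, min_count=5,
                 sg=0, negative=10, workers=4)      # sg=0 selects CBOW

vocab = model.wv.key_to_index
in_embeddings = {w: model.wv[w] for w in vocab}               # IN projections
out_embeddings = {w: model.syn1neg[vocab[w]] for w in vocab}  # OUT projections (assumed attribute)

def in_vocab(tokens):
    # OOV terms are simply dropped before computing the DESM score,
    # mirroring the treatment described above.
    return [t for t in tokens if t in vocab]
```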
³ https://en.wikipedia.org/wiki/Keyword_stuffing
⁴ We provide the IN and OUT word embeddings trained using word2vec on the Bing query corpus at http://research.microsoft.com/projects/DESM.
⁵ In machine translation there are examples of interesting strategies to handle out-of-vocabulary words (e.g., [25]).
Table 2: A word perturbation analysis to show how the DESM collects evidence on the aboutness of a document. The DESM models are more robust to irrelevant terms. For example, when the word "giraffe" is replaced by the word "cambridge", the passage on giraffes is still scored low by the DESM for the query "cambridge" because it finds low supporting evidence from the other words in the passage. However, the DESM confuses the passage about Oxford to be relevant for the query "cambridge" because it detects a high number of similar words in the passage that frequently co-occur with the word "Cambridge".
Query: "cambridge"

Passage (i), about Cambridge: The city of Cambridge is a university city and the county town of Cambridgeshire, England. It lies in East Anglia, on the River Cam, about 50 miles (80 km) north of London. According to the United Kingdom Census 2011, its population was 123,867 (including 24,488 students). This makes Cambridge the second largest city in Cambridgeshire after Peterborough, and the 54th largest in the United Kingdom. There is archaeological evidence of settlement in the area during the Bronze Age and Roman times; under Viking rule Cambridge became an important trading centre. The first town charters were granted in the 12th century, although city status was not conferred until 1951.

Passage (ii), about Oxford: Oxford is a city in the South East region of
England and the county town of Oxfordshire. With a population of 159,994 it is the 52nd largest city in the United Kingdom, and one of the fastest growing and most ethnically diverse. Oxford has a broad economic base. Its industries include motor manufacturing, education, publishing and a large number of information technology and science-based businesses, some being academic offshoots. The city is known worldwide as the home of the University of Oxford, the oldest university in the English-speaking world. Buildings in Oxford demonstrate examples of every English architectural period since the arrival of the Saxons, including the mid-18th-century Radcliffe Camera. Oxford is known as the city of dreaming spires, a term coined by poet Matthew Arnold.

Passage (iii), about giraffes: The giraffe (Giraffa camelopardalis) is an African even-toed ungulate mammal, the tallest living terrestrial animal and the largest ruminant. Its species name refers to its camel-like shape and its leopard-like colouring. Its chief distinguishing characteristics are its extremely long neck and legs, its horn-like
ossicones, and its distinctive coat patterns. It is classified under the family Giraffidae, along with its closest extant relative, the okapi. The nine subspecies are distinguished by their coat patterns. The giraffe's scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannas, grasslands, and open woodlands.

Passage (iv), about giraffes but with the word "giraffe" replaced by the word "cambridge": The cambridge (Giraffa camelopardalis) is an African even-toed ungulate mammal, the tallest living terrestrial animal and the largest ruminant. Its species name refers to its camel-like shape and its leopard-like colouring. Its chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its distinctive coat patterns. It is classified under the family Giraffidae, along with its closest extant relative, the okapi. The nine subspecies are distinguished by their coat patterns.
The cambridge's scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. giraffes usually inhabit savannas, grasslands, and open woodlands.

Passage (v), about Cambridge but with the word "Cambridge" replaced by the word "giraffe": The city of Giraffe is a university city and the county town of Cambridgeshire, England. It lies in East Anglia, on the River Cam, about 50 miles (80 km) north of London. According to the United Kingdom Census 2011, its population was 123,867 (including 24,488 students). This makes Giraffe the second largest city in Cambridgeshire after Peterborough, and the 54th largest in the United Kingdom. There is archaeological evidence of settlement in the area during the Bronze Age and Roman times; under Viking rule Giraffe became an important trading centre. The first town charters were granted in the 12th century, although city status was not conferred until 1951.

Scores for the query "cambridge":

| Passage | DESM (IN-OUT) score | DESM (IN-IN) score | Term frequency count |
|---|---|---|---|
| (i) about Cambridge | -0.062 | 0.120 | 5 |
| (ii) about Oxford | -0.070 | 0.107 | 0 |
| (iii) about giraffes | -0.102 | 0.011 | 0 |
| (iv) "giraffe" replaced by "cambridge" | -0.094 | 0.033 | 3 |
| (v) "Cambridge" replaced by "giraffe" | -0.076 | 0.088 | 0 |
Document length normalization. In Equation 5 we normalize the scores linearly by both the query and the document lengths. While more sophisticated length normalization strategies, such as pivoted document length normalization [43], are reasonable, we leave this also for future work.
# 2.3 The Mixture Model
The DESM is a weak ranker and, while it models some important aspects of document ranking, our experiments will show that it is effective only at ranking at high positions (i.e. documents we already know are at least somewhat relevant). We are inspired by previous work in neural language models, for example by Bengio et al. [4], which demonstrates that combining a neural model for predicting the next word with a more traditional counting-based language model is effective because the two models make different kinds of mistakes. Adopting a similar strategy, we propose a simple and intuitive mixture model combining the DESM with a term-based feature, such as BM25, for the non-telescoping evaluation setup described in Section 3.2.
We define the mixture model MM(Q, D) as,

MM(Q, D) = α · DESM(Q, D) + (1 - α) · BM25(Q, D),  where α ∈ ℝ and 0 ≤ α ≤ 1.
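A sketch of the mixture as a scoring and ranking function, assuming the DESM and BM25 scores for each query-document pair have already been computed (all names are ours):

```python
def mixture_score(desm_score_qd, bm25_score_qd, alpha):
    """MM(Q, D) = alpha * DESM(Q, D) + (1 - alpha) * BM25(Q, D), 0 <= alpha <= 1."""
    assert 0.0 <= alpha <= 1.0
    return alpha * desm_score_qd + (1.0 - alpha) * bm25_score_qd

def rank_with_mixture(candidates, desm_scores, bm25_scores, alpha):
    # candidates: iterable of document ids; *_scores: dicts doc_id -> score.
    return sorted(candidates,
                  key=lambda d: mixture_score(desm_scores[d], bm25_scores[d], alpha),
                  reverse=True)
```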
To choose the appropriate value for α, we perform a parameter sweep between zero and one at intervals of 0.01 on the implicit feedback based training set described in Section 3.1.
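The sweep itself is a plain grid search over α; a sketch assuming an evaluate_ndcg(alpha) callable (ours, not the paper's) that ranks the training queries with the mixture at that α and returns the resulting NDCG:

```python
def sweep_alpha(evaluate_ndcg, step=0.01):
    """Try alpha = 0.00, 0.01, ..., 1.00 and keep the best-performing value."""
    best_alpha, best_ndcg = 0.0, float("-inf")
    steps = int(round(1.0 / step))
    for i in range(steps + 1):
        alpha = i * step
        ndcg = evaluate_ndcg(alpha)
        if ndcg > best_ndcg:
            best_alpha, best_ndcg = alpha, ndcg
    return best_alpha, best_ndcg
```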
# 3. EXPERIMENTS
We compare the retrieval performance of DESM against BM25, a traditional count-based method, and Latent Semantic Analysis (LSA), a traditional vector-based method. We conduct our evaluations on two different test sets (explicit and implicit relevance judgements) and under two different experimental conditions (a large collection of documents and a telescoped subset).
Table 3: NDCG results comparing the DESM(IN-OUT) with the BM25 and the LSA baselines. The DESM(IN-OUT) performs significantly better than both the BM25 and the LSA baselines at all rank positions. It also performs better than the DESM(IN-IN) on both evaluation sets. The DESMs using embeddings trained on the query corpus also perform better than those trained on document body text. The highest NDCG value in every column is highlighted in bold, and all the statistically significant (p < 0.05) differences over the BM25 baseline are marked with an asterisk (*).
| Model | Explicitly judged: NDCG@1 | NDCG@3 | NDCG@10 | Implicit feedback: NDCG@1 | NDCG@3 | NDCG@10 |
|---|---|---|---|---|---|---|
| BM25 | 23.69 | 29.14 | 44.77 | 13.65 | 27.41 | 49.26 |
| LSA | 22.41* | 28.25* | 44.24* | 16.35* | 31.75* | 52.05* |
| DESM (IN-IN, trained on body text) | 23.59 | 29.59 | 45.51* | 18.62* | 33.80* | 53.32* |
| DESM (IN-IN, trained on queries) | 23.75 | 29.72 | 46.36* | 18.37* | 35.18* | 54.20* |
| DESM (IN-OUT, trained on body text) | 24.06 | 30.32* | 46.57* | 19.67* | 35.53* | 54.13* |
| DESM (IN-OUT, trained on queries) | 25.02* | 31.14* | 47.89* | 20.66* | 37.34* | 55.84* |
# 3.1 Datasets
All the datasets used for this study are sampled from Bing's large-scale query logs. The body text for all the candidate documents is extracted from Bing's document index.
Explicitly judged test set. This evaluation set consists of 7,741 queries randomly sampled from Bing's query logs from the period of October, 2014 to December, 2014. For each sampled query, a set of candidate documents is constructed by retrieving the top results from Bing over multiple scrapes during a period of a few months. In total the final evaluation set contains 171,302 unique documents across all queries, which are then judged by human evaluators on a five-point relevance scale (Perfect, Excellent, Good, Fair and Bad).
In our non-telescoped experiment, we consider every distinct document in the test set as a candidate for every query in the same dataset. This setup is more in line with traditional IR evaluation methodologies, where the model needs to retrieve the most relevant documents from a single large document collection. Our empirical results in Section 4 will show that the DESM is a strong re-ranking signal, but as a standalone ranker it is prone to false positives. Yet, when we mix our neural model (DESM) with a counting-based model (BM25), good performance is achieved.
For all the experiments we report the normalized discounted cumulative gain (NDCG) at different rank positions as a measure of performance for the different models under study.
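For reference, a compact sketch of NDCG@k as it is conventionally computed; exact gain and discount conventions vary between toolkits, so this is one standard variant (linear gains, log2 discount), not necessarily the one used in the paper's evaluation pipeline:

```python
import math

def dcg_at_k(relevances, k):
    # `relevances` are graded judgments in the order the system ranked the documents.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```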
Implicit feedback based test set. This dataset is sampled from the Bing logs from the period of September 22, 2014 to September 28, 2014. The dataset consists of the search queries submitted by the user and the corresponding documents that were returned by the search engine in response. The documents are associated with a binary relevance judgment based on whether the document was clicked by the user. This test set contains 7,477 queries and 42,573 distinct documents.
# 3.3 Baseline models
We compare the DESM models to a term-matching based baseline, in BM25, and a vector space model baseline, in Latent Semantic Analysis (LSA) [8]. For the BM25 baseline we use the values of 1.7 for the k1 parameter and 0.95 for the b parameter, based on a parameter sweep on the implicit feedback based training set. The LSA model is trained on the body text of 366,470 randomly sampled documents from Bing's index with a vocabulary size of 480,608 words. Note that unlike the word2vec models that train on word co-occurrence data, the LSA model by default trains on a word-document matrix.
Implicit feedback based training set. This dataset is sampled exactly the same way as the previous test set, but from the period of September 15, 2014 to September 21, 2014, and has 7,429 queries and 42,253 distinct documents. This set is used for tuning the parameters of the BM25 baseline and the mixture model.
# 3.2 Experiment Setup
We perform two distinct sets of evaluations for all the experimental and baseline models. In the first experiment, we consider all documents retrieved by Bing (from the online scrapes in the case of the explicitly judged set, or as recorded in the search logs in the case of the implicit feedback based sets) as the candidate set of documents to be re-ranked for each query. The fact that each of the documents was retrieved by the search engine implies that they are all at least marginally relevant to the query. Therefore, this experimental design isolates performance at the top ranks. As mentioned in Section 2.2, there is a parallel between this experiment setup and the telescoping [27] evaluation strategy, and it has been used often in recent literature (e.g., [18, 41]). Note that having a strong retrieval model, in the form of the Bing search engine, for first-stage retrieval enables us to have a high-confidence candidate set, and in turn ensures reliable comparison with the baseline BM25 feature.
# 4. RESULTS
Table 3 shows the NDCG-based performance evaluations under the telescoping setup. On both the explicitly judged and the implicit feedback based test sets, the DESM(IN-OUT) performs significantly better than the BM25 and the LSA baselines, as well as the DESM(IN-IN) model. Under the all-documents-as-candidates setup in Table 4, however, the DESMs (both IN-IN and IN-OUT) are clearly seen to not perform well as standalone document rankers. The mixture of DESM(IN-OUT) (trained on queries) and BM25 rectifies this problem and gives the best NDCG result under the non-telescoping settings, demonstrating a statistically significant improvement over the BM25 baseline.
Figure 4 illustrates that the DESM(IN-OUT) is the most discriminating feature for the relevant and the irrelevant documents retrieved by a first-stage retrieval system. However, BM25 is clearly superior in separating out the random irrelevant documents in the candidate set. The mixture model, unsurprisingly, has the good properties of both the DESM(IN-OUT) and the BM25 models. Figure 5 shows the joint distribution of the scores from the different models, which further reinforces these points and shows that the DESM and the BM25 models make different errors.
Table 4: Results of NDCG evaluations under the non-telescoping settings. Both the DESM and the LSA models perform poorly in the presence of random irrelevant documents in the candidate set. The mixture of the DESM (IN-OUT) with BM25 achieves the best NDCG. The best NDCG values are highlighted per column in bold and all the statistically significant (p < 0.05) differences with the BM25 baseline are indicated by an asterisk (*).
| Model | Explicitly Judged: NDCG@1 | NDCG@3 | NDCG@10 | Implicit Feedback: NDCG@1 |
|---|---|---|---|---|
| BM25 | 21.44 | 26.09 | 37.53 | 11.68 |
| LSA | 04.61* | 04.63* | 04.83* | 01.97* |
| DESM (IN-IN, trained on body text) | 06.69* | 06.80* | 07.39* | 03.39* |
| DESM (IN-IN, trained on queries) | 05.56* | 05.59* | 06.03* | 02.62* |
| DESM (IN-OUT, trained on body text) | 01.01* | 01.16* | 01.58* | 00.78* |
| DESM (IN-OUT, trained on queries) | 00.62* | 00.58* | 00.81* | 00.29* |
| BM25 + DESM (IN-IN, trained on body text) | 21.53 | 26.16 | 37.48 | 11.96 |
| BM25 + DESM (IN-IN, trained on queries) | **21.58** | 26.20 | 37.62 | 11.91 |
| BM25 + DESM (IN-OUT, trained on body text) | 21.47 | 26.18 | 37.55 | 11.83 |
| BM25 + DESM (IN-OUT, trained on queries) | 21.54 | **26.42*** | **37.86*** | **12.22*** |

(The remaining implicit-feedback columns are truncated in the source; only the first NDCG@3 value, 22.14 for BM25, survives.)
We do not report the results of evaluating the mixture models under the telescoping setup because tuning the α parameter on the training set under those settings yields the standalone DESM models as the best performers. Overall, we conclude that the DESM is primarily suited for ranking at top positions or in conjunction with other document ranking features.
Interestingly, under the telescoping settings, the LSA baseline also shows some (albeit small) improvement over the BM25 baseline on the implicit feedback based test set but a loss on the explicitly judged test set.
With respect to the CBOW's training data, the DESM models with embeddings trained on the query corpus perform significantly better than the models trained on document body text across different configurations. We have a plausible hypothesis for why this happens. Users tend to choose the most significant terms that they expect to match in the target document when formulating their search queries, so the query corpus can be seen as a version of the document corpus with the less important terms filtered out. When training on the query corpus, the CBOW model is therefore more likely to see important terms within the context window than when trained on a corpus of document body text, which may make it a better training dataset for the Word2vec model.
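A sketch of this training setup is shown below, assuming gensim 4.x's Word2Vec implementation (where, with negative sampling, the input projections live in model.wv.vectors and the output projections in model.syn1neg). The hyperparameters other than the 200-dimensional embedding size are illustrative, not the paper's exact settings.

```python
from gensim.models import Word2Vec

# `queries` is an iterable of tokenized search queries, e.g. [["cambridge", "weather"], ...]
model = Word2Vec(
    sentences=queries,
    vector_size=200,   # 200-dimensional embeddings, as used in this work
    window=5,
    sg=0,              # CBOW
    negative=5,        # negative sampling, so an output matrix is learned
    min_count=5,
    epochs=5,
)

IN = model.wv.vectors           # input (IN) embeddings, one row per vocabulary word
OUT = model.syn1neg             # output (OUT) embeddings, normally discarded after training
vocab = model.wv.key_to_index   # word -> row index shared by both matrices
```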
# 5. RELATED WORK
The probabilistic model of information retrieval leads to the development of the BM25 ranking feature [35]. The increase in BM25 as term frequency increases is justified according to the 2-Poisson model [15, 36], which makes a distinction between documents about a term and documents that merely mention that term. Those two types of document have term frequencies drawn from two different Poisson distributions, which justifies the use of term frequency as evidence of aboutness. By contrast, the model introduced in this paper uses the occurrence of other related terms as evidence of aboutness. For example, under the 2-Poisson model a document about Eminem will tend to mention the term "eminem" repeatedly. Under our all-pairs vector model, a document about Eminem will tend to contain more related terms such as "rap", "tracklist" and "performs". Our experiments show both notions of aboutness to be useful.
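For reference, the all-pairs aggregation described in the abstract, with the document represented by the centroid of its normalized OUT vectors, can be restated as follows (notation is ours, not a new definition):

```latex
\mathrm{DESM}_{\text{IN-OUT}}(Q, D) =
  \frac{1}{|Q|} \sum_{q_i \in Q}
  \frac{\mathbf{q}_i^{\text{IN}\top}\,\overline{\mathbf{D}}^{\text{OUT}}}
       {\lVert \mathbf{q}_i^{\text{IN}} \rVert\, \lVert \overline{\mathbf{D}}^{\text{OUT}} \rVert},
\qquad
\overline{\mathbf{D}}^{\text{OUT}} =
  \frac{1}{|D|} \sum_{d_j \in D}
  \frac{\mathbf{d}_j^{\text{OUT}}}{\lVert \mathbf{d}_j^{\text{OUT}} \rVert}.
```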
Neural embeddings for IR. The word embeddings produced by the CBOW and SG models have been shown to be surprisingly effective at capturing detailed semantics useful for various Natural Language Processing (NLP) and reasoning tasks, including word analogies [28, 29]. Recent papers have explored in detail the SG and CBOW training methodology [11, 37] and its connection to other approaches for learning word embeddings such as explicit vector space representations [23, 24], matrix factorization [22, 33, 42] and density-based representations [45].
Term based IR. For an overview of lexical matching approaches for information retrieval, such as the vector space, probabilistic and language modelling approaches, see [26]. In Salton's classic vector space model [39] queries and documents are represented as sparse vectors in a vector space of dimensionality |V|, where V is the word vocabulary. Elements in the vector are non-zero if that term occurs. Documents can be ranked in descending order of cosine similarity with the query, although a wide variety of weighting and similarity functions are possible [51]. In contrast to the classical vector space model, LSA [8], PLSA [17] and LDA [5, 47] learn dense vector representations of much lower dimensionality. It has been suggested that these models perform poorly as standalone retrieval models [1] unless combined with other TF-IDF like features. In our approach the query and documents are also low-dimensional dense vectors. We learn 200-dimensional neural word embeddings, and generate document vectors as the centroids of all the word vectors. Yan et al. [49] suggested that term correlation data is less sparse than the term-document matrix and hence may be more effective for training embeddings.
Baroni et al. [3] evaluated neural word embeddings against traditional word counting approaches and demonstrated the success of the former on a variety of NLP tasks. However, more recent works [16, 40] have shown that there does not seem to be one embedding approach that is best for all tasks. This observation is similar to ours, where we note that IN-IN and IN-OUT model different kinds of word relationships. Although IN-IN, for example, works well for word analogy tasks [28, 29], it might perform less effectively for other tasks, such as those in information retrieval. If so, instead of claiming that any one embedding captures "semantics", it is probably better to characterize embeddings according to which tasks they perform well on.
Our paper is not the first to apply neural word embeddings in IR. Ganguly et al. [9] recently proposed a generalized language model for IR that incorporates IN-IN similarities. The similarities are used to expand and reweight the terms in each document, which seems to be motivated by intuitions similar to ours, where a term is reinforced if a similar term occurs in the query. In their case, after greatly expanding the document vocabulary, they perform retrieval based on word occurrences rather than in an embedding space.
[Figure 4: score distribution plots for four features (IN-OUT, BM25, IN-IN, and BM25 + IN-OUT with α = 0.97), each over Rel., Irrel. (J), and Irrel. (R) documents; see caption below.]
Figure 4: Feature distributions over three sets of documents: Rel. retrieved by Bing and judged relevant, Irrel. (J) retrieved by Bing and judged irrelevant, and Irrel. (R) random documents not retrieved for this query. Our telescoping evaluation setup only uses the first two sets, whose distributions are quite close in all four plots. IN-OUT may have the greatest difference between Rel. and Irrel. (J), which corresponds to its good telescoping NDCG results. BM25 is far superior at separating Irrel. (R) results from the rest, which explains the success of BM25 and mixture models in non-telescoping evaluation.
Word embeddings have also been studied in other IR contexts such as term reweighting [50], cross-lingual retrieval [14, 46, 52] and short-text similarity [20]. Beyond word co-occurrence, recent studies have also explored learning text embeddings from clickthrough data [18, 41], session data [12, 13, 30], query prefix-suffix pairs [31], via auto-encoders [38], for sentiment classification [44] and for long text [21].
# 6. DISCUSSION AND CONCLUSION
We have also identified and investigated a failure of embedding-based ranking: performance is highly dependent on the relevancy of the initial candidate set of documents to be ranked. While the standalone DESM clearly bests BM25 and LSA on ranking telescoped datasets (Table 3), the same embedding model needs to be combined with BM25 to perform well on a raw, unfiltered document collection (Table 4). However, this is not a significant deficiency of the DESM, as telescoping is a common initial step in industrial IR pipelines [7]. Moreover, our DESM is especially well suited for late-stage ranking since it incurs little computational overhead, only requiring the document's centroid (which can be precomputed and stored) and its cosine similarity with the query.
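The following is a minimal sketch of that late-stage scoring path, assuming the IN/OUT matrices and vocabulary from a trained model; the function names and numerical guards are illustrative.

```python
import numpy as np

def unit_rows(M, eps=1e-9):
    """L2-normalize each row of a matrix."""
    return M / (np.linalg.norm(M, axis=-1, keepdims=True) + eps)

def out_centroid(doc_tokens, OUT, vocab, eps=1e-9):
    """Offline: one OUT-space centroid per document, precomputed and stored."""
    rows = [OUT[vocab[w]] for w in doc_tokens if w in vocab]
    if not rows:
        return np.zeros(OUT.shape[1])
    c = unit_rows(np.asarray(rows)).mean(axis=0)
    return c / (np.linalg.norm(c) + eps)

def desm_in_out(query_tokens, centroid, IN, vocab):
    """Online: average cosine between each query IN vector and the stored centroid."""
    vecs = [IN[vocab[w]] for w in query_tokens if w in vocab]
    if not vecs:
        return 0.0
    return float(np.mean(unit_rows(np.asarray(vecs)) @ centroid))
```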
This paper motivated and evaluated the use of neural word embeddings to gauge a document's aboutness with respect to a query. Mapping words to points in a shared semantic space allows a query term to be compared against all terms in the document, providing for a refined relevance scoring. We formulate a Dual Embedding Space Model (DESM) that leverages the often discarded output embeddings learned by the CBOW model. Our model exploits a novel use of both the input and output embeddings to capture topic-based semantic relationships. The examples in Table 1 show that drastically different nearest neighbors can be found by using proximity in the IN-OUT versus the IN-IN spaces. We have demonstrated through intuition and large-scale experimentation that ranking via proximity in IN-OUT space is better for retrieval than IN-IN based rankers. This finding emphasizes that usage of the CBOW and SG models is application dependent and that quantifying semantic relatedness via cosine similarity in IN space should not be a default practice.
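As an illustration of how those two notions of proximity differ in practice, nearest neighbours can be computed in either space from the same matrices. This is a sketch, not the code used to produce Table 1; `inv_vocab` (index to word) is an assumed companion to `vocab`.

```python
import numpy as np

def neighbours(word, source, target, vocab, inv_vocab, topn=5):
    """Rank the vocabulary by cosine between `word`'s vector in `source`
    and every word's vector in `target`.
    neighbours(w, IN, IN, ...)  -> IN-IN neighbours
    neighbours(w, IN, OUT, ...) -> IN-OUT neighbours"""
    v = source[vocab[word]]
    v = v / np.linalg.norm(v)
    T = target / np.linalg.norm(target, axis=1, keepdims=True)
    sims = T @ v
    order = np.argsort(-sims)
    return [(inv_vocab[i], float(sims[i])) for i in order if inv_vocab[i] != word][:topn]
```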
In addition to proposing an effective and efficient ranking scheme, our work suggests multiple avenues for further investigation. Can the IN-IN and the IN-OUT based distances be incorporated into other stages of the IR pipeline, such as pseudo relevance feedback and query expansion? Are there better ways to compose word-level embeddings into document-level representations? Is there a principled way to filter the noisy comparisons that degrade performance on the non-telescoped datasets?
Content-based document retrieval is a difficult problem. Not only is language inherently subtle and ambiguous, allowing the same ideas to be represented by a multitude of different words, but the appearance of a given word in a document does not necessarily mean that document is relevant. While TF-IDF features such as BM25 are a proven source of evidence for aboutness, they are not sufficiently precise to rank highly relevant documents ahead of fairly relevant documents.
[Figure 5: bivariate plots of the IN-OUT and IN-IN scores against BM25 for three document sets: Relevant, Irrelevant (judged), and Irrelevant (unjudged); see caption below.]
Figure 5: Bivariate analysis of our lexical matching and neural word embedding features. On unjudged (random) documents, BM25 is very successful at giving zero score, but both IN-IN and IN-OUT give a range of scores. This explains their poor performance in non-telescoping evaluation. For the judged relevant and judged irrelevant sets, we see a range of cases where both types of feature fail. For example, BM25 has both false positives, where an irrelevant document mentions the query terms, and false negatives, where a relevant document does not mention the query terms.
To do that task well, all of a document's words must be considered. Neural word embeddings, and specifically our DESM, provide an effective and efficient way for all words in a document to contribute, resulting in a ranking attuned to semantic subtleties.
# REFERENCES
[1] A. Atreya and C. Elkan. Latent semantic indexing (LSI) fails for TREC collections. ACM SIGKDD Explorations Newsletter, 12(2):5–10, 2011.
[8] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. Indexing by latent semantic analysis. JASIS, 41(6):391â407, 1990.
[9] D. Ganguly, D. Roy, M. Mitra, and G. J. Jones. Word embedding based generalized language model for information retrieval. In Proc. SIGIR, pages 795â798. ACM, 2015.
[2] R. Baeza-Yates, P. Boldi, and F. Chierichetti. Essential web pages are easy to ï¬nd. pages 97â107. International World Wide Web Conferences Steering Committee, 2015.
[10] J. Gao, K. Toutanova, and W.-t. Yih. Clickthrough-based latent semantic models for web search. In Proc. SIGIR, pages 675–684. ACM, 2011.
[3] M. Baroni, G. Dinu, and G. Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proc. ACL, volume 1, pages 238–247, 2014.
[4] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. JMLR, 3:1137â1155, 2003.
[11] Y. Goldberg and O. Levy. word2vec explained: deriving mikolov et al.âs negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722, 2014.
[12] M. Grbovic, N. Djuric, V. Radosavljevic, and N. Bhamidipati. Search retargeting using directed query embeddings. In Proc. WWW, pages 37â38. International World Wide Web Conferences Steering Committee, 2015.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[6] A. Bookstein and D. R. Swanson. Probabilistic models for automatic indexing. JASIS, 25(5):312–316, 1974.
[7] B. B. Cambazoglu, H. Zaragoza, O. Chapelle, J. Chen, C. Liao, Z. Zheng, and J. Degenhardt. Early exit optimizations for additive machine learned ranking systems. In Proc. WSDM, pages 411–420. ACM, 2010.
[13] M. Grbovic, N. Djuric, V. Radosavljevic, F. Silvestri, and N. Bhamidipati. Context-and content-aware embeddings for query rewriting in sponsored search. In Proc. SIGIR, pages 383â392. ACM, 2015.
[14] P. Gupta, K. Bali, R. E. Banchs, M. Choudhury, and P. Rosso. Query expansion for mixed-script information retrieval. In Proc. SIGIR, pages 677â686. ACM, 2014.
[15] S. P. Harter. A probabilistic approach to automatic keyword indexing. JASIS, 26(5):280â289, 1975.
[16] F. Hill, K. Cho, S. Jean, C. Devin, and Y. Bengio. Not all neural embeddings are born equal. arXiv preprint arXiv:1410.0718, 2014.
[17] T. Hofmann. Probabilistic latent semantic indexing. In Proc. SIGIR, pages 50–57. ACM, 1999.
[18] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In Proc. CIKM, pages 2333â2338. ACM, 2013.
[19] R. Jones, B. Rey, O. Madani, and W. Greiner. Generating query substitutions. In Proc. WWW â06, pages 387â396, 2006.
[20] T. Kenter and M. de Rijke. Short text similarity with word embeddings. In Proc. CIKM, volume 15, page 115.
[21] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014.
[22] O. Levy and Y. Goldberg. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pages 2177â2185, 2014.
[23] O. Levy and Y. Goldberg. Linguistic regularities in sparse and explicit word representations. In Proc. CoNLL, page 171, 2014.
[24] O. Levy, Y. Goldberg, and I. Dagan. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225, 2015.
[25] M.-T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word problem in neural machine translation. In Proc. ACL, 2015.
[26] C. D. Manning, P. Raghavan, H. Schütze, et al. Introduction to information retrieval, volume 1. Cambridge university press Cambridge, 2008.
[27] I. Matveeva, C. Burges, T. Burkard, A. Laucius, and L. Wong. High accuracy retrieval with multiple nested ranker. pages 437â444. ACM, 2006.
[28] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[29] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Proc. NIPS, pages 3111–3119, 2013.
[30] B. Mitra. Exploring session context using distributed representations of queries and reformulations. In Proc. SIGIR, pages 3â12. ACM, 2015.
[31] B. Mitra and N. Craswell. Query auto-completion for rare preï¬xes. In Proc. CIKM. ACM, 2015.
[32] E. Nalisnick, B. Mitra, N. Craswell, and R. Caruana. Improving document ranking with dual word embeddings. In Proc. WWW. International World Wide Web Conferences Steering Committee, to appear, 2016.
[33] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. Proc. EMNLP, 12: 1532â1543, 2014.
[34] S. Robertson. Understanding inverse document frequency: on theoretical arguments for IDF. Journal of Documentation, 60(5):503–520, 2004.
1602.01137 | 55 | [34] S. Robertson. Understanding inverse document frequency: on theoretical arguments for idf. Journal of documentation, 60 (5):503–520, 2004.
[35] S. Robertson and H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc, 2009.
[36] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. pages 232–241. Springer-Verlag New York, Inc., 1994.
[37] X. Rong. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738, 2014.
[38] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7): 969–978, 2009.
[39] G. Salton, A. Wong, and C.-S. Yang. A vector space model for automatic indexing. Communications of the ACM, 18(11): 613–620, 1975.
[40] T. Schnabel, I. Labutov, D. Mimno, and T. Joachims. Evaluation methods for unsupervised word embeddings. In Proc. EMNLP, 2015. | 1602.01137#55 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 56 | [41] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. Learning semantic representations using convolutional neural networks for web search. In Proc. WWW, pages 373–374, 2014.
[42] T. Shi and Z. Liu. Linking glove with word2vec. arXiv preprint arXiv:1411.5595, 2014.
[43] A. Singhal, C. Buckley, and M. Mitra. Pivoted document length normalization. In Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval, pages 21–29. ACM, 1996.
[44] D. Tang, F. Wei, N. Yang, M. Zhou, T. Liu, and B. Qin. Learning sentiment-specific word embedding for twitter sentiment classification. In Proc. ACL, volume 1, pages 1555–1565, 2014.
[45] L. Vilnis and A. McCallum. Word representations via gaussian embedding. arXiv preprint arXiv:1412.6623, 2014. | 1602.01137#56 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 57 | [45] L. Vilnis and A. McCallum. Word representations via gaussian embedding. arXiv preprint arXiv:1412.6623, 2014.
[46] I. Vulić and M.-F. Moens. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In Proc. SIGIR, pages 363–372. ACM, 2015.
[47] X. Wei and W. B. Croft. Lda-based document models for ad-hoc retrieval. In Proc. SIGIR, pages 178–185. ACM, 2006.
[48] B. J. Wilson and A. M. J. Schakel. Controlled experiments for word embeddings. arXiv preprint arXiv:1510.02675, 2015.
[49] X. Yan, J. Guo, S. Liu, X. Cheng, and Y. Wang. Learning topics in short texts by non-negative matrix factorization on term correlation matrix. In Proceedings of the SIAM International Conference on Data Mining, 2013.
[50] G. Zheng and J. Callan. Learning to reweight terms with distributed representations. In Proc. SIGIR, pages 575–584. ACM, 2015. | 1602.01137#57 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.01137 | 58 | [50] G. Zheng and J. Callan. Learning to reweight terms with distributed representations. In Proc. SIGIR, pages 575–584. ACM, 2015.
[51] J. Zobel and A. Moffat. Exploring the similarity space. In ACM SIGIR Forum, volume 32, pages 18–34. ACM, 1998.
[52] W. Y. Zou, R. Socher, D. M. Cer, and C. D. Manning. Bilingual word embeddings for phrase-based machine translation. In EMNLP, pages 1393–1398, 2013. | 1602.01137#58 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | [
{
"id": "1510.02675"
}
] |
1602.00367 | 0 | arXiv:1602.00367v1 [cs.CL] 1 Feb 2016
# Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers
Yijun Xiao Center for Data Sciences, New York University [email protected]
Kyunghyun Cho Courant Institute and Center for Data Science, New York University [email protected]
# Abstract
Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches, such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large-scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performance with far fewer parameters.
1
# Introduction | 1602.00367#0 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 1 | 1
# Introduction
Document classification is a task in natural language processing where one needs to assign a single or multiple predefined categories to a sequence of text. A conventional approach to document classification generally consists of a feature extraction stage followed by a classification stage. For instance, it is usual to use a TF-IDF vector of a given document as an input feature to a subsequent classifier.
More recently, it has become more common to use a deep neural network, which jointly performs feature extraction and classification, for document classification (Kim, 2014; Mesnil et al., 2014; Socher et al., 2013; Carrier and Cho, 2014). In most cases, an input document is represented as a sequence of words, each of which is represented as a one-hot vector.1 Each word in the sequence is projected into a
1 A one-hot vector of the i-th word is a binary vector whose | 1602.00367#1 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 2 | 1 A one-hot vector of the i-th word is a binary vector whose
continuous vector space by being multiplied with a weight matrix, forming a sequence of dense, real-valued vectors. This sequence is then fed into a deep neural network which processes the sequence in multiple layers, resulting in a prediction probability. This whole pipeline, or a network, is tuned jointly to maximize the classification accuracy on a training set.
One important aspect of these recent approaches based on deep learning is that they often work at the level of words. Despite its recent success, the word-level approach has a number of major shortcomings. First, it is statistically inefficient, as each word token is considered separately and estimated by the same number of parameters, despite the fact that many words share a common root, prefix or suffix. This can be overcome by using an external mechanism to segment each word and infer its components (root, prefix, suffix), but this is not desirable as the mechanism is highly language-dependent and is tuned independently from the target objective of document classification. | 1602.00367#2 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 3 | Second, the word-level approach cannot handle out-of-vocabulary words. Any word that is not present, or is rare, in the training corpus is mapped to an unknown word token. This is problematic, because the model cannot handle typos easily, which happen frequently in informal documents such as postings from social network sites. Also, this makes it difficult to apply a trained model to a new domain, as there may be a large mismatch between the domain of the training corpus and the target domain.
elements are all zeros, except for the i-th element which is set to one. | 1602.00367#3 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 4 | elements are all zeros, except for the i-th element which is set to one.
Recently, a number of researchers have noticed that it is not at all necessary for a deep neural network to work at the word level. As long as the document is represented as a sequence of one-hot vectors, the model works without any change, regardless of whether each one-hot vector corresponds to a word, a sub-word unit or a character. Based on this intuition, Kim et al. (Kim et al., 2015) and Ling et al. (Ling et al., 2015) proposed to use a character sequence as an alternative to the word-level one-hot vector. A similar idea was applied to dependency parsing in (Ballesteros et al., 2015). The work in this direction most relevant to this paper is the character-level convolutional network for document classification by Zhang et al. (Zhang et al., 2015). | 1602.00367#4 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 5 | The character-level convolutional net in (Zhang et al., 2015) is composed of many layers of convolution and max-pooling, similarly to the convolutional network in computer vision (see, e.g., (Krizhevsky et al., 2012)). Each layer first extracts features from small, overlapping windows of the input sequence and pools over small, non-overlapping windows by taking the maximum activations in the window. This is applied recursively (with untied weights) many times. The final convolutional layer's activation is flattened to form a vector which is then fed into a small number of fully-connected layers followed by the classification layer.
We notice that the use of a vanilla convolutional network for character-level document classification has one shortcoming. As the receptive field of each convolutional layer is often small (7 or 3 in (Zhang et al., 2015)), the network must have many layers in order to capture long-term dependencies in an input sentence. This is likely the reason why Zhang et al. (Zhang et al., 2015) used a very deep convolutional network with six convolutional layers followed by two fully-connected layers. | 1602.00367#5 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 6 | In order to overcome this inefficiency in modeling a character-level sequence, in this paper we propose to make a hybrid of convolutional and recurrent networks. This was motivated by recent successes of applying recurrent networks to natural languages (see, e.g., (Cho et al., 2014; Sundermeyer et al., 2015)) and by the fact that the recurrent network can efficiently capture long-term dependencies even with a single layer. The hybrid model processes
an input sequence of characters with a number of convolutional layers followed by a single recurrent layer. Because the recurrent layer, consisting of either gated recurrent units (GRU, (Cho et al., 2014)) or long short-term memory units (LSTM, (Hochreiter and Schmidhuber, 1997; Gers et al., 2000)), can efficiently capture long-term dependencies, the proposed network only needs a very small number of convolutional layers. | 1602.00367#6 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 7 | We empirically validate the proposed model, to which we refer as a convolution-recurrent network, on the eight large-scale document classification tasks from (Zhang et al., 2015). We mainly compare the proposed model against the convolutional network in (Zhang et al., 2015) and show that it is indeed possible to use a much smaller model to achieve the same level of classification performance when a recurrent layer is put on top of the convolutional layers.
# 2 Basic Building Blocks: Neural Network Layers
In this section, we describe four basic layers in a neural network that will be used later to constitute a single network for classifying a document.
# 2.1 Embedding Layer | 1602.00367#7 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 8 | In this section, we describe four basic layers in a neural network that will be used later to constitute a single network for classifying a document.
# 2.1 Embedding Layer
As mentioned earlier, each document is represented as a sequence of one-hot vectors. A one-hot vector of the i-th symbol in a vocabulary is a binary vector whose elements are all zeros except for the i-th element, which is set to one. Therefore, each document is a sequence of T one-hot vectors (x1, x2, . . . , xT). An embedding layer projects each of the one-hot vectors into a d-dimensional continuous vector space R^d. This is done by simply multiplying the one-hot vector from the left with a weight matrix W ∈ R^{d×|V|}, where |V| is the number of unique symbols in the vocabulary:
et = Wxt.
After the embedding layer, the input sequence of one-hot vectors becomes a sequence of dense, real-valued vectors (e1, e2, . . . , eT).
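To make the embedding operation concrete, here is a minimal NumPy sketch of the lookup; the toy vocabulary, the dimension d and the random initialization are illustrative assumptions, not the settings used in the paper's experiments:

```python
import numpy as np

# Illustrative sizes only: a |V| = 5 toy vocabulary and d = 4 embedding dimension.
V = ["a", "b", "c", "d", "e"]
d, vocab_size = 4, len(V)
rng = np.random.default_rng(0)
W = rng.normal(size=(d, vocab_size))          # embedding matrix W in R^{d x |V|}

def one_hot(index, size):
    x = np.zeros(size)
    x[index] = 1.0
    return x

# e_t = W x_t: multiplying by a one-hot vector selects one column of W.
x_t = one_hot(V.index("c"), vocab_size)
e_t = W @ x_t
assert np.allclose(e_t, W[:, V.index("c")])   # lookup and matrix product agree
```

Because xt is one-hot, the matrix product reduces to selecting a single column of W, which is how embedding layers are implemented in practice.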
# 2.2 Convolutional Layer
A convolutional layer consists of two stages. In the first stage, a set of dâ filters of receptive field size r, | 1602.00367#8 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 9 | # 2.2 Convolutional Layer
A convolutional layer consists of two stages. In the first stage, a set of d′ filters of receptive field size r,
F ∈ R^{d′×r}, is applied to the input sequence: ft = φ(F [et−(r/2); . . . ; et; . . . ; et+(r/2)]), where φ is a nonlinear activation function such as tanh or a rectifier. This is done for every time step of the input sequence, resulting in a sequence F = (f1, f2, . . . , fT).
The resulting sequence F is max-pooled with size r′:
f′t = max (f r′(t−1)+1, . . . , f r′t),
where max applies for each element of the vectors, resulting in a sequence
F′ = (f′1, f′2, . . . , f′T/r′).
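The two stages can be sketched in NumPy as follows; the sequence length, filter count and receptive field below are toy values chosen for illustration, and the zero-padding at the borders is an implementation choice, not something specified in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, d_prime, r, r_pool = 12, 8, 16, 5, 2     # toy sizes
E = rng.normal(size=(T, d))                    # embedded sequence (e_1, ..., e_T)
F = rng.normal(size=(d_prime, r * d))          # d' filters over windows of r embeddings

def relu(x):
    return np.maximum(x, 0.0)

# Stage 1: apply every filter to each window of r consecutive embeddings (zero-padded).
pad = r // 2
E_pad = np.vstack([np.zeros((pad, d)), E, np.zeros((pad, d))])
feats = np.stack([relu(F @ E_pad[t:t + r].reshape(-1)) for t in range(T)])   # (T, d')

# Stage 2: non-overlapping max-pooling of size r' along the time axis.
pooled = feats.reshape(T // r_pool, r_pool, d_prime).max(axis=1)             # (T/r', d')
print(pooled.shape)
```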
# 2.3 Recurrent Layer
A recurrent layer consists of a recursive function f which takes as input one input vector and the previ- ous hidden state, and returns the new hidden state:
ht = f(xt, ht−1), where xt ∈ R^d is one time step from the input sequence (x1, x2, . . . , xT). h0 ∈ R^{d′} is often initialized as an all-zero vector.
Recursive Function The most naive recursive function is implemented as
ht = tanh(Wx xt + Uh ht−1), | 1602.00367#9 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 10 | Recursive Function The most naive recursive function is implemented as
ht = tanh(Wx xt + Uh ht−1),
where Wx ∈ R^{d′×d} and Uh ∈ R^{d′×d′} are the weight matrices. This naive recursive function, however, is known to suffer from the problem of vanishing gradients (Bengio et al., 1994; Hochreiter et al., 2001).
More recently it is common to use a more complicated function that learns to control the flow of information so as to prevent the vanishing gradient and allow the recurrent layer to more easily capture long-term dependencies. The long short-term memory (LSTM) unit from (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) is a representative example. The LSTM unit consists of four sub-units—input, output and forget gates and a candidate memory cell—which are computed by
it = σ(Wi xt + Ui ht−1), ot = σ(Wo xt + Uo ht−1), ft = σ(Wf xt + Uf ht−1), c̃t = tanh(Wc xt + Uc ht−1).
Based on these, the LSTM unit first computes the memory cell:
ct = ft ⊙ ct−1 + it ⊙ c̃t, | 1602.00367#10 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 11 | Based on these, the LSTM unit ï¬rst computes the memory cell:
ct = ft ⊙ ct−1 + it ⊙ c̃t,
and computes the output, or activation:
ht = ot ⊙ tanh(ct).
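A compact NumPy sketch of one LSTM step following these equations; bias terms are omitted for brevity and all sizes are toy values, so this is an illustration rather than the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, P):
    """One LSTM step; P holds the weight matrices W_* and U_* (biases omitted)."""
    i = sigmoid(P["W_i"] @ x_t + P["U_i"] @ h_prev)   # input gate
    o = sigmoid(P["W_o"] @ x_t + P["U_o"] @ h_prev)   # output gate
    f = sigmoid(P["W_f"] @ x_t + P["U_f"] @ h_prev)   # forget gate
    c_tilde = np.tanh(P["W_c"] @ x_t + P["U_c"] @ h_prev)
    c = f * c_prev + i * c_tilde                      # memory cell
    h = o * np.tanh(c)                                # output / activation
    return h, c

# Toy dimensions: input size d = 4, hidden size d' = 3.
rng = np.random.default_rng(2)
d, dh = 4, 3
P = {k: rng.normal(size=(dh, d if k.startswith("W") else dh))
     for k in ["W_i", "U_i", "W_o", "U_o", "W_f", "U_f", "W_c", "U_c"]}
h, c = np.zeros(dh), np.zeros(dh)
for x_t in rng.normal(size=(5, d)):                   # run over a length-5 toy sequence
    h, c = lstm_step(x_t, h, c, P)
```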
The resulting sequence from the recurrent layer is then
(h1, h2, . . . , hT), where T is the length of the input sequence to the layer.
Bidirectional Recurrent Layer One property of the recurrent layer is that there is an imbalance in the amount of information seen by the hidden states at different time steps. The earlier hidden states only observe a few vectors from the lower layer, while the later ones are computed based on most of the lower-layer vectors. This can be easily alleviated by having a bidirectional recurrent layer which is composed of two recurrent layers working in opposite directions. This layer will return two sequences of hidden states from the forward and reverse recurrent layers, respectively.
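Reusing lstm_step from the previous sketch, a bidirectional layer can be illustrated as two independent recurrences over the sequence and its reverse; this is a schematic sketch under the same toy assumptions, not the authors' code:

```python
import numpy as np

def run_lstm(X, P, dh):
    """Unidirectional pass over a sequence X of shape (T, d) using lstm_step
    from the previous sketch; returns the hidden states (h_1, ..., h_T)."""
    h, c, states = np.zeros(dh), np.zeros(dh), []
    for x_t in X:
        h, c = lstm_step(x_t, h, c, P)
        states.append(h)
    return states

def bidirectional_lstm(X, P_fwd, P_bwd, dh):
    forward = run_lstm(X, P_fwd, dh)                 # reads x_1 ... x_T
    backward = run_lstm(X[::-1], P_bwd, dh)[::-1]    # reads x_T ... x_1, re-aligned in time
    return forward, backward                         # two sequences of hidden states
```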
# 2.4 Classiï¬cation Layer | 1602.00367#11 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 12 | # 2.4 Classiï¬cation Layer
A classification layer is in essence a logistic regression classifier. Given a fixed-dimensional input from the lower layer, the classification layer affine-transforms it followed by a softmax activation function (Bridle, 1990) to compute the predictive probabilities for all the categories. This is done by
p(y = k|X) = exp(wk⊤ x + bk) / Σ_{k′=1}^{K} exp(wk′⊤ x + bk′),
where the wk's and bk's are the weight and bias vectors. We assume there are K categories.
It is worth noting that this classification layer takes as input a fixed-dimensional vector, while the recurrent layer or convolutional layer returns a variable-length sequence of vectors (the length determined by the input sequence). This can be addressed by either simply max-pooling the vectors (Kim, 2014) over the time dimension (for both convolutional and recurrent layers), taking the last hidden state (for recurrent layers) or taking the last hidden states of the forward and reverse recurrent networks (for bidirectional recurrent layers).
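A small NumPy sketch of this layer; the summary vector here stands for whichever fixed-dimensional input is chosen (max-pooled states, the last hidden state, or the concatenated last forward/backward states), and the sizes are illustrative:

```python
import numpy as np

def softmax_classify(x, W, b):
    """p(y = k | X) for a fixed-dimensional summary vector x.
    W has one row w_k and b one bias b_k per category."""
    scores = W @ x + b
    scores -= scores.max()                 # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

# Toy example: K = 3 categories, summary vector of size 6
# (e.g., the concatenated last forward/backward hidden states).
rng = np.random.default_rng(3)
K, dim = 3, 6
W, b = rng.normal(size=(K, dim)), np.zeros(K)
h_summary = rng.normal(size=dim)
print(softmax_classify(h_summary, W, b))   # sums to 1 over the K categories
```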
# 3 Character-Level Convolutional-Recurrent Network | 1602.00367#12 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 13 | # 3 Character-Level Convolutional-Recurrent Network
In this section, we propose a hybrid of convolutional and recurrent networks for character-level document classification.
# 3.1 Motivation
One basic motivation for using the convolutional layer is that it learns to extract higher-level features that are invariant to local translation. By stacking multiple convolutional layers, the network can extract higher-level, abstract, (locally) translation-invariant features from the input sequence, in this case the document, efficiently.
Despite this advantage, we noticed that it requires many layers of convolution to capture long-term dependencies, due to the locality of the convolution and pooling (see Sec. 2.2). This becomes more severe as the length of the input sequence grows, and in the case of character-level modeling, it is usual for a document to be a sequence of hundreds or thousands of characters. Ultimately, this leads to the need for a very deep network having many convolutional layers. | 1602.00367#13 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 14 | Contrary to the convolutional layer, the recurrent layer from Sec. 2.3 is able to capture long-term dependencies even when there is only a single layer. This is especially true in the case of a bidirectional recurrent layer, because each hidden state is computed based on the whole input sequence. However, the recurrent layer is computationally more expensive. The computational complexity grows linearly with respect to the length of the input sequence, and most of the computations need to be done sequentially. This is in contrast to the convolutional layer for which computations can be efficiently done in parallel.
Based on these observations, we propose to combine the convolutional and recurrent layers into a single model so that this network can capture long-term dependencies in the document more efficiently for the task of classification.
# 3.2 Model Description
The proposed model, convolution-recurrent network (ConvRec),
[Figure 1 (b): diagram of the proposed network — an embedding layer (Sec. 2.1), convolutional layers (Sec. 2.2) and recurrent layers (Sec. 2.3) followed by the classification layer (Sec. 2.4), mapping the input (x1, x2, . . . , xT) to p(y|X).]
(a) (b) | 1602.00367#14 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 15 | (a) (b)
[Figure 1 (a): diagram of the convolutional baseline — an embedding layer (Sec. 2.1) and convolutional layers (Sec. 2.2) followed by the classification layer (Sec. 2.4), mapping the input (x1, x2, . . . , xT) to p(y|X).]
Figure 1: Graphical illustration of (a) the convolutional network and (b) the proposed convolution-recurrent network for character-level document classification.
with a one-hot sequence input
X = (x1, x2, . . . , xT ).
This input sequence is turned into a sequence of dense, real-valued vectors
E = (e1, e2, . . . , eT )
using the embedding layer from Sec. 2.1.
We apply multiple convolutional layers (Sec. 2.2) to E to get a shorter sequence of feature vectors F′ = (f′1, f′2, . . . , f′T′).
This feature sequence is then fed into a bidirectional recurrent layer (Sec. 2.3), resulting in two sequences
Hforward = (→h1, →h2, . . . , →hT′), Hreverse = (←h1, ←h2, . . . , ←hT′).
We take the last hidden states of both directions and concatenate them to form a fixed-dimensional vector:
h = [→hT′; ←h1]. | 1602.00367#15 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 16 | We take the last hidden states of both directions and concatenate them to form a ï¬xed-dimensional vec- tor:
h = [→hT′; ←h1].
Finally, the fixed-dimensional vector h is fed into the classification layer to compute the predictive probabilities p(y = k|X) of all the categories k = 1, . . . , K given the input sequence X.
See Fig. 1 (b) for the graphical illustration of the proposed model.
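For illustration, the whole pipeline can be written down as a short PyTorch module. The hyperparameters below (96-character vocabulary, d = 8 embeddings, 128 filters, pooling size 2, a single bidirectional LSTM with 128 units) follow the settings described later in Sec. 4.2, but the module itself is a sketch under those assumptions rather than the authors' implementation; it returns unnormalized class scores, to which a softmax is applied as in Sec. 2.4.

```python
import torch
import torch.nn as nn

class ConvRec(nn.Module):
    """Sketch of the convolution-recurrent architecture described above."""
    def __init__(self, vocab_size=96, d=8, d_conv=128, d_rnn=128, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.conv = nn.Sequential(
            nn.Conv1d(d, d_conv, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(d_conv, d_conv, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.LSTM(d_conv, d_rnn, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * d_rnn, n_classes)

    def forward(self, char_ids):                   # char_ids: (batch, T) integer codes
        e = self.embed(char_ids).transpose(1, 2)   # (batch, d, T)
        f = self.conv(e).transpose(1, 2)           # (batch, T', d_conv)
        _, (h_n, _) = self.rnn(f)                  # h_n: (2, batch, d_rnn)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)    # last forward & backward states
        return self.out(h)                         # unnormalized class scores

logits = ConvRec()(torch.randint(0, 96, (2, 128)))  # toy batch: 2 documents, 128 chars
```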
Data set | Classes | Task | Training size | Test size
AG's news | 4 | news categorization | 120,000 | 7,600
Sogou news | 5 | news categorization | 450,000 | 60,000
DBPedia | 14 | ontology classification | 560,000 | 70,000
Yelp review polarity | 2 | sentiment analysis | 560,000 | 38,000
Yelp review full | 5 | sentiment analysis | 650,000 | 50,000
Yahoo! Answers | 10 | question type classification | 1,400,000 | 60,000
Amazon review polarity | 2 | sentiment analysis | 3,600,000 | 400,000
Amazon review full | 5 | sentiment analysis | 3,000,000 | 650,000
Table 1: Data sets summary.
# 3.3 Related Work | 1602.00367#16 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 17 | Table 1: Data sets summary.
# 3.3 Related Work
Convolutional network for document classification The convolutional networks for document classification, proposed earlier in (Kim, 2014; Zhang et al., 2015) and illustrated in Fig. 1 (a), are almost identical to the proposed model. One major difference is the lack of the recurrent layer in their models. Their model consists of the embedding layer, a number of convolutional layers followed by the classification layer only.
Recurrent network for document classification Carrier and Cho in (Carrier and Cho, 2014) give a tutorial on using a recurrent neural network for sentiment analysis, which is one type of document classification. Unlike the convolution-recurrent network proposed in this paper, they do not use any convolutional layer in their model. Their model starts with the embedding layer followed by the recurrent layer. The hidden states from the recurrent layer are then averaged and fed into the classification layer. | 1602.00367#17 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 18 | Hybrid model: Conv-GRNN Perhaps the most related work is the convolution-gated recurrent neural net (Conv-GRNN) from (Tang et al., 2015). They proposed a hierarchical processing of a document. In their model, either a convolutional network or a recurrent network is used to extract a feature vector from each sentence, and another (bidirectional) recurrent network is used to extract a feature vector of the document by reading the sequence of sentence vectors. This document vector is used by the classification layer.
work. In their model, the convolutional network is strictly constrained to model each sentence, and the recurrent network to model inter-sentence structures. On the other hand, the proposed ConvRec network uses a recurrent layer in order to assist the convolutional layers in capturing long-term dependencies (across the whole document) more efficiently. These are orthogonal to each other, and it is possible to plug in the proposed ConvRec as a sentence feature extraction module in the Conv-GRNN from (Tang et al., 2015). Similarly, it is possible to use the proposed ConvRec as a composition function for the sequence of sentence vectors to make computation more efficient, especially when the input document consists of many sentences. | 1602.00367#18 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 19 | Recursive Neural Networks A recursive neural network has been applied to sentence classification earlier (see, e.g., (Socher et al., 2013)). In this approach, a composition function is defined and recursively applied at each node of the parse tree of an input sentence to eventually extract a feature vector of the sentence. This model family is heavily dependent on an external parser, unlike all the other models such as the ConvRec proposed here as well as other related models described above. It is also not trivial to apply the recursive neural network to documents which consist of multiple sentences. We do not consider this family of recursive neural networks directly related to the proposed model.
# 4 Experiment Settings
The major difference between their approach and the proposed ConvRec is in the purpose of combining the convolutional network and recurrent network.
# 4.1 Task Description
We validate the proposed model on eight large-scale document classiï¬cation tasks from (Zhang et al., | 1602.00367#19 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 20 | We validate the proposed model on eight large-scale document classiï¬cation tasks from (Zhang et al.,
Model | Embedding (Sec. 2.1): |V|, d | Convolutional (Sec. 2.2): d′, r, r′ | Recurrent (Sec. 2.3): d′
C2R1DD | 96, 8 | D, (5, 3), (2, 2) | D
C3R1DD | 96, 8 | D, (5, 5, 3), (2, 2, 2) | D
C4R1DD | 96, 8 | D, (5, 5, 3, 3), (2, 2, 2, 2) | D
C5R1DD | 96, 8 | D, (5, 5, 3, 3, 3), (2, 2, 2, 1, 2) | D
Table 2: Different architectures tested in this paper.
2015). The sizes of the data sets range from 200,000 to 4,000,000 documents. These tasks include sentiment analysis (Yelp reviews, Amazon reviews), ontology classification (DBPedia), question type classification (Yahoo! Answers), and news categorization (AG's news, Sogou news).
Dropout (Srivastava et al., 2014) is an effective way to regularize deep neural networks. We apply dropout after the last convolutional layer as well as after the recurrent layer. Without dropout, the inputs to the recurrent layer xtâs are | 1602.00367#20 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 21 | Data Sets A summary of the statistics for each data set is listed in Table 1. There are equal num- ber of examples in each class for both training and test sets. DBPedia data set, for example, has 40,000 training and 5,000 test examples per class. For more detailed information on the data set construction process, see (Zhang et al., 2015).
xt = f′t,
where f′t is the t-th output from the last convolutional layer defined in Sec. 2.2. After adding dropout, we have
r^i_t ∼ Bernoulli(p), xt = rt ⊙ f′t,
where p is the dropout probability, which we set to 0.5, and r^i_t is the i-th component of the binary mask vector rt ∈ R^{d′}.
# 4.2 Model Settings
Referring to Sec. 2.1, the vocabulary V for our experiments consists of 96 characters including all upper-case and lower-case letters, digits, common punctuation marks, and spaces. Character embed- ding size d is set to 8. | 1602.00367#21 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 22 | As described in Sec. 3.1, we believe that by adding recurrent layers, one can effectively reduce the number of convolutional layers needed in order to capture long-term dependencies. Thus for each data set, we consider models with two to five convolutional layers. Following the notation in Sec. 2.2, each layer has d′ = 128 filters. For AG's news and Yahoo! Answers, we also experiment with larger models with 1,024 filters in the convolutional layers. Receptive field size r is either five or three depending on the depth. Max pooling size r′ is set to 2. Rectified linear units (ReLUs, (Glorot et al., 2011)) are used as activation functions in the convolutional layers. The recurrent layer (Sec. 2.3) is fixed to a single layer of bidirectional LSTM for all models. Hidden state dimension d′ is set to 128. More detailed setups are described in Table 2.
# 4.3 Training and Validation
For each of the data sets, we randomly split the full training examples into training and validation. The validation size is the same as the corresponding test size and is balanced in each class. | 1602.00367#22 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 23 | For each of the data sets, we randomly split the full training examples into training and validation. The validation size is the same as the corresponding test size and is balanced in each class.
The models are trained by minimizing the following regularized negative log-likelihood or cross-entropy loss. X's and y's are document character sequences and their corresponding observed class assignments in the training set D. w is the collection of model weights. Weight decay is applied with λ = 5 × 10^{-4}.
l(w) = − Σ_{(X,y)∈D} log p(y|X) + (λ/2) ||w||²
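As a sanity check of the objective, a minimal NumPy version of this regularized cross-entropy (with illustrative toy inputs) looks as follows:

```python
import numpy as np

def regularized_nll(log_probs, labels, weights, lam=5e-4):
    """Cross-entropy over the training set plus L2 weight decay, matching the
    loss above. log_probs: (N, K) array of log p(y|X) for each example."""
    nll = -log_probs[np.arange(len(labels)), labels].sum()
    l2 = 0.5 * lam * sum(float((w ** 2).sum()) for w in weights)
    return nll + l2

# Toy check with 3 examples, 4 classes and one weight matrix.
rng = np.random.default_rng(4)
scores = rng.normal(size=(3, 4))
log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
print(regularized_nll(log_probs, np.array([0, 2, 1]), [rng.normal(size=(4, 4))]))
```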
We train our models using AdaDelta with ρ = 0.95, ε = 10^{-5} and a batch size of 128. Examples are padded to the longest sequence in each batch and masks are generated to help identify the padded region. The corresponding masks of | 1602.00367#23 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with much less parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 | [
{
"id": "1508.06615"
},
{
"id": "1508.02096"
},
{
"id": "1508.00657"
}
] |
1602.00367 | 24 | Data set #Ex. #Cl. Network #Params â Error (%) Network #Params_ Error (%) AG 120k 4 C2R1D1024 20M 8.39/8.64 C6F2D1024 27â¢M. -/9.85 Sogou 450k 5 C3R1D128 AM 4.82/4.83 C6F2D1024* 27â¢M. -/4.88 DBPedia 560k 14 C2R1D128 3M 1.46/1.43 C6F2D1024 27â¢M. -/1.66 Yelp P. 560k 2 C2R1D128 3M 5.50/5.51 C6F2D1024 27â¢M. -/5.25 Yelp F. 650k 5 C2R1D128 3M 38.00/38.18 | C6F2D1024 27â¢M. -/38.40 Yahoo A. 1.4M 10 | C2R1D1024 20M 28.62/28.26 | C6F2D1024* 27â¢M. -/29.55 Amazon P. || 3.6M 2 C3R1D128 AM 5.64/5.87 C6F2D256* 2.7M -/5.50 Amazon F. || 3.0M 5 | 1602.00367#24 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
Table 3: Results on character-level document classification. CcRrFfDd refers to a network with c convolutional layers, r recurrent layers, f fully-connected layers and d-dimensional feature vectors. * denotes a model which does not distinguish between lower-case and upper-case letters. We only considered the character-level models without using Thesaurus-based data augmentation. We report both the validation and test errors. In our case, the network architecture for each dataset was selected based on the validation errors. The numbers of parameters are approximate.
The gradient of the cost function is computed with backpropagation through time (BPTT) (Werbos, 1990). If the gradient has an L2 norm larger than 5, we rescale it by a factor of 5/\lVert g \rVert_2:
g_c = g \cdot \min\left(1, \frac{5}{\lVert g \rVert_2}\right)
where g = \partial l / \partial w and g_c is the clipped gradient. An early stopping strategy is employed to prevent overfitting. Before training, we set an initial patience value. At each epoch, we calculate and record the validation loss. If it is lower than the current lowest validation loss by 0.5%, we extend patience by two. Training stops when the number of epochs is larger than patience. We report the test error rate evaluated using the model with the lowest validation error.
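A hedged sketch of how the clipping rule and the patience schedule fit together; `train_epoch`, `validate`, the loss function, and the initial patience of 10 are placeholders rather than values reported here:

```python
import torch

CLIP_NORM = 5.0  # rescale the gradient when its global L2 norm exceeds 5

def train_step(model, optimizer, loss_fn, chars, labels):
    optimizer.zero_grad()
    loss = loss_fn(model, chars, labels)
    loss.backward()
    # Implements g_c = g * min(1, 5 / ||g||_2) over all parameters jointly.
    torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP_NORM)
    optimizer.step()
    return loss.item()

def fit(train_epoch, validate, initial_patience=10):
    """train_epoch() runs one pass over the training data; validate() returns
    the current validation loss. Training stops once the epoch count exceeds
    the (possibly extended) patience."""
    best_val, patience, epoch = float("inf"), initial_patience, 0
    while epoch <= patience:
        train_epoch()
        val_loss = validate()
        if val_loss < best_val * (1.0 - 0.005):   # at least 0.5% better than the best so far
            patience += 2                          # extend patience by two epochs
        best_val = min(best_val, val_loss)
        epoch += 1
    return best_val
```

In such a setup the parameter snapshot with the lowest validation error would be the one kept for the final test evaluation.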
# 5 Results and Analysis
Experimental results are listed in Table 3. We compare to the best character-level convolutional model without data augmentation from (Zhang et al., 2015) on each data set. Our model achieves comparable performance on all eight data sets with significantly fewer parameters. Specifically, it performs better on the AG's news, Sogou news, DBPedia, Yelp review full, and Yahoo! Answers data sets.
Number of classes Fig. 2 (a) shows how the relative performance of our model changes with respect to the number of classes. It is worth noting that as the number of classes increases, our model achieves better results compared to convolution-only models. For example, our model has a much lower test error on DBPedia, which has 14 classes, but it scores worse on Yelp review polarity and Amazon review polarity, both of which have only two classes. Our conjecture is that more detailed and complete information needs to be preserved from the input text for the model to assign one of many classes to it. The convolution-only model likely loses detailed local features because it has more pooling layers. On the other hand, the proposed model, with fewer pooling layers, can better maintain the detailed information and hence performs better when such needs exist.
Number of training examples Although it is less significant, Fig. 2 (b) shows that the proposed model generally works better than the convolution-only model when the data size is small. Considering the difference in the number of parameters, we suspect that because the proposed model is more compact, it is less prone to overfitting. Therefore it generalizes better when the training size is limited.
Number of convolutional layers An interesting observation from our experiments is that the model accuracy does not always increase with the number of convolutional layers. Performances peak at two or three convolutional layers and decrease if we add more to the model.
Figure 2: Relative test performance of the proposed model compared to the convolution-only model w.r.t. (a) the number of classes and (b) the size of the training set. Lower is better.
As more convolutional layers produce longer character n-grams, this indicates that there is an optimal level of local features to be fed into the recurrent layer. Also, as discussed above, more pooling layers likely lead to the loss of detailed information, which in turn affects the ability of the recurrent layer to capture long-term dependencies.
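The character n-gram length (receptive field) seen by each feature entering the recurrent layer can be worked out directly; the kernel size of 5 and pooling width of 2 below are illustrative assumptions:

```python
def receptive_field(num_conv_layers, kernel_size=5, pool_size=2):
    """Characters covered by one feature fed to the recurrent layer, assuming each
    block is a stride-1 convolution of width kernel_size followed by max pooling."""
    span, jump = 1, 1            # span in characters, spacing between adjacent features
    for _ in range(num_conv_layers):
        span += (kernel_size - 1) * jump   # each convolution widens the span
        jump *= pool_size                  # each pooling spaces features further apart
    return span

# Under these assumptions: receptive_field(2) == 13, receptive_field(3) == 29 characters.
```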
Number of filters We experiment with large models with 1,024 filters on the AG's news and Yahoo! Answers data sets. Although adding more filters in the convolutional layers does help with model performance on these two data sets, the gains are limited compared to the increased number of parameters. Validation error improves from 8.75% to 8.39% for AG's news and from 29.48% to 28.62% for Yahoo! Answers, at the cost of a roughly 70-fold increase in the number of model parameters.
Note that in our model we set the number of filters in the convolutional layers to be the same as the dimension of the hidden states in the recurrent layer. It is possible to use more filters in the convolutional layers while keeping the recurrent layer dimension the same, to potentially get better performance with less sacrifice in the number of parameters.
# 6 Conclusion

In this paper, we proposed a hybrid model that processes an input sequence of characters with a number of convolutional layers followed by a single recurrent layer. The proposed model is able to encode documents from the character level, capturing sub-word information.

We validated the proposed model on eight large-scale document classification tasks. The model achieved comparable results with far fewer convolutional layers than the convolution-only architecture. We further discussed several aspects that affect model performance. The proposed model generally performs better when the number of classes is large, the training size is small, and the number of convolutional layers is set to two or three.
The proposed model is a general encoding architecture that is not limited to document classification tasks or natural language inputs. For example, (Chen et al., 2015; Visin et al., 2015) combined convolution and recurrent layers to tackle image segmentation tasks, and (Sainath et al., 2015) applied a similar model to speech recognition. It will be interesting to see future research on applying the architecture to other applications such as machine translation and music information retrieval. Using recurrent layers as substitutes for pooling layers to potentially reduce the loss of detailed local information is also a direction worth exploring.
# Acknowledgments
This work is done as a part of the course DS-GA 1010-001 Independent Study in Data Science at the Center for Data Science, New York University.
# References
[Ballesteros et al.2015] Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. arXiv preprint arXiv:1508.00657.
[Bengio et al.1994] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166.
[Bridle1990] John S Bridle. 1990. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing, pages 227–236. Springer.
[Carrier and Cho2014] Pierre Luc Carrier and Kyunghyun Cho. 2014. LSTM networks for sentiment analysis. Deep Learning Tutorials.
[Chen et al.2015] Liang-Chieh Chen, Jonathan T. Barron, George Papandreou, Kevin Murphy, and Alan L. Yuille. 2015. Semantic image segmentation with task-specific edge detection using CNNs and a discriminatively trained domain transform. CoRR, abs/1511.03328.
[Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
[Gers et al.2000] Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural computation, 12(10):2451–2471.
[Glorot et al.2011] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Geoffrey J. Gordon and David B. Dunson, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), volume 15, pages 315–323. Journal of Machine Learning Research - Workshop and Conference Proceedings.
[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
[Hochreiter et al.2001] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, volume 1. IEEE.
[Kim et al.2015] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2015. Character-aware neural language models. arXiv preprint arXiv:1508.06615.
[Kim2014] Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
[Krizhevsky et al.2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc.
[Ling et al.2015] Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096.