# Asynchronous Methods for Deep Reinforcement Learning
... online RL updates are strongly correlated. By storing the agent's data in an experience replay memory, the data can be batched (Riedmiller, 2005; Schulman et al., 2015a) or randomly sampled (Mnih et al., 2013; 2015; Van Hasselt et al., 2015) from different time-steps. Aggregating over memory in this way reduces non-stationarity and decorrelates updates, but at the same time limits the methods to off-policy reinforcement learning algorithms. Deep RL algorithms based on experience replay have achieved unprecedented success in challenging domains such as Atari 2600. However, experience replay has several drawbacks: it uses more memory and computation per real interaction; and it requires off-policy learning algorithms that can update from data generated by an older policy.

In this paper we provide a very different paradigm for deep reinforcement learning. Instead of experience replay, we asynchronously execute multiple agents in parallel, on multiple instances of the environment. This parallelism also decorrelates the agents' data into a more stationary process, since at any given time-step the parallel agents will be experiencing a variety of different states. This simple idea enables a much larger spectrum of fundamental on-policy RL algorithms, such as Sarsa, n-step methods, and actor-critic methods, as well as off-policy RL algorithms such as Q-learning, to be applied robustly and effectively using deep neural networks. Our parallel reinforcement learning paradigm also offers practical benefits.
Whereas previous approaches to deep reinforcement learning rely heavily on specialized hardware such as GPUs (Mnih et al., 2015; Van Hasselt et al., 2015; Schaul et al., 2015) or massively distributed architectures (Nair et al., 2015), our experiments run on a single machine with a standard multi-core CPU. When applied to a variety of Atari 2600 domains, on many games asynchronous reinforcement learning achieves better results, in far less time than previous GPU-based algorithms, using far less resource than massively distributed approaches. The best of the proposed methods, asynchronous advantage actor-critic (A3C), also mastered a variety of continuous motor control tasks as well as learned general strategies for exploring 3D mazes purely from visual inputs. We believe that the success of A3C on both 2D and 3D games, discrete and continuous action spaces, as well as its ability to train feedforward and recurrent agents makes it the most general and successful reinforcement learning agent to date.

# 2. Related Work
The General Reinforcement Learning Architecture (Gorila) of (Nair et al., 2015) performs asynchronous training of reinforcement learning agents in a distributed setting. In Gorila, each process contains an actor that acts in its own copy of the environment, a separate replay memory, and a learner that samples data from the replay memory and computes gradients of the DQN loss (Mnih et al., 2015) with respect to the policy parameters. The gradients are asynchronously sent to a central parameter server which updates a central copy of the model. The updated policy parameters are sent to the actor-learners at fixed intervals.
By using 100 separate actor-learner processes and 30 parameter server instances, a total of 130 machines, Gorila was able to significantly outperform DQN over 49 Atari games. On many games Gorila reached the score achieved by DQN over 20 times faster than DQN. We also note that a similar way of parallelizing DQN was proposed by (Chavez et al., 2015).
# 3. Reinforcement Learning Background

We consider the standard reinforcement learning setting where an agent interacts with an environment E over a number of discrete time steps. At each time step t, the agent receives a state s_t and selects an action a_t from some set of possible actions A according to its policy π, where π is a mapping from states s_t to actions a_t. In return, the agent receives the next state s_{t+1} and receives a scalar reward r_t. The process continues until the agent reaches a terminal state, after which the process restarts. The return R_t = Σ_{k=0}^{∞} γ^k r_{t+k} is the total accumulated return from time step t with discount factor γ ∈ (0, 1]. The goal of the agent is to maximize the expected return from each state s_t.

The action value Q^π(s, a) = E[R_t | s_t = s, a] is the expected return for selecting action a in state s and following policy π. The optimal value function Q*(s, a) = max_π Q^π(s, a) gives the maximum action value for state s and action a achievable by any policy. Similarly, the value of state s under policy π is defined as V^π(s) = E[R_t | s_t = s] and is simply the expected return for following policy π from state s.
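To make the return definition concrete, here is a minimal Python sketch (my own illustration, not from the paper) that accumulates the discounted return R_t = Σ_k γ^k r_{t+k} from a finite sequence of rewards.

```python
def discounted_return(rewards, gamma=0.99):
    """Compute R = sum_k gamma^k * r_k for a finite reward sequence."""
    ret = 0.0
    # Iterate backwards so each step folds in the discounted future return.
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret

# Example: three rewards of 1.0 give 1 + 0.99 + 0.99^2 = 2.9701.
print(discounted_return([1.0, 1.0, 1.0]))
```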
In earlier work, (Li & Schuurmans, 2011) applied the MapReduce framework to parallelizing batch reinforcement learning methods with linear function approximation. Parallelism was used to speed up large matrix operations but not to parallelize the collection of experience or stabilize learning. (Grounds & Kudenko, 2008) proposed a parallel version of the Sarsa algorithm that uses multiple separate actor-learners to accelerate training. Each actor-learner learns separately and periodically sends updates to weights that have changed significantly to the other learners using peer-to-peer communication.

In value-based model-free reinforcement learning methods, the action value function is represented using a function approximator, such as a neural network. Let Q(s, a; θ) be an approximate action-value function with parameters θ. The updates to θ can be derived from a variety of reinforcement learning algorithms. One example of such an algorithm is Q-learning, which aims to directly approximate the optimal action value function:
Q*(s, a) ≈ Q(s, a; θ).

In one-step Q-learning, the parameters θ of the action value function Q(s, a; θ) are learned by iteratively minimizing a sequence of loss functions, where the ith loss function is defined as

L_i(θ_i) = E[(r + γ max_{a′} Q(s′, a′; θ_{i−1}) − Q(s, a; θ_i))²],

where s′ is the state encountered after state s. We refer to the above method as one-step Q-learning because it updates the action value Q(s, a) toward the one-step return r + γ max_{a′} Q(s′, a′; θ).

(Tsitsiklis, 1994) studied convergence properties of Q-learning in the asynchronous optimization setting. These results show that Q-learning is still guaranteed to converge when some of the information is outdated, as long as outdated information is always eventually discarded and several other technical assumptions are satisfied.
Even earlier, (Bertsekas, 1982) studied the related problem of distributed dynamic programming.

Another related area of work is in evolutionary methods, which are often straightforward to parallelize by distributing fitness evaluations over multiple machines or threads (Tomassini, 1999). Such parallel evolutionary approaches have recently been applied to some visual reinforcement learning tasks. In one example, (Koutník et al., 2014) evolved convolutional neural network controllers for the TORCS driving simulator by performing fitness evaluations on 8 CPU cores in parallel.
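Returning to the one-step Q-learning loss defined above, the following PyTorch-style sketch (my own reconstruction, not the paper's code; the network and batch names are assumptions) computes the squared TD error against a target network with parameters θ⁻.

```python
import torch

def one_step_q_loss(q_net, target_net, s, a, r, s_next, done, gamma=0.99):
    """Squared error between Q(s,a;theta) and r + gamma * max_a' Q(s',a';theta^-)."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)       # Q(s, a; theta)
    with torch.no_grad():                                      # target uses the older parameters theta^-
        max_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * max_next           # no bootstrap at terminal states
    return ((target - q_sa) ** 2).mean()
```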
One drawback of using one-step methods is that obtaining a reward r only directly affects the value of the state action pair s, a that led to the reward. The values of other state action pairs are affected only indirectly through the updated value Q(s, a). This can make the learning process slow since many updates are required to propagate a reward to the relevant preceding states and actions.

One way of propagating rewards faster is by using n-step returns (Watkins, 1989; Peng & Williams, 1996). In n-step Q-learning, Q(s, a) is updated toward the n-step return defined as r_t + γ r_{t+1} + ··· + γ^{n−1} r_{t+n−1} + γ^n max_a Q(s_{t+n}, a). This results in a single reward r directly affecting the values of n preceding state action pairs. This makes the process of propagating rewards to relevant state-action pairs potentially much more efficient.
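A minimal sketch (illustrative, not the paper's implementation) of the n-step targets described above: each state in a rollout of up to t_max steps is updated toward the longest n-step return available from that point.

```python
def n_step_targets(rewards, bootstrap_value, gamma=0.99):
    """Return targets R_i = r_i + gamma*r_{i+1} + ... + gamma^n * bootstrap for each rollout step.

    `bootstrap_value` is max_a Q(s_{t+n}, a) from the last observed state (or 0 if it is terminal).
    """
    R = bootstrap_value
    targets = []
    for r in reversed(rewards):          # work backwards from the last step of the rollout
        R = r + gamma * R
        targets.append(R)
    return list(reversed(targets))       # targets[i] is the n-step return starting at step i

# Example: rewards [0, 0, 1] with bootstrap 0 give [gamma**2, gamma, 1.0].
```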
In contrast to value-based methods, policy-based model-free methods directly parameterize the policy π(a|s; θ) and update the parameters θ by performing, typically approximate, gradient ascent on E[R_t]. One example of such a method is the REINFORCE family of algorithms due to Williams (1992). Standard REINFORCE updates the policy parameters θ in the direction ∇_θ log π(a_t|s_t; θ) R_t, which is an unbiased estimate of ∇_θ E[R_t]. It is possible to reduce the variance of this estimate while keeping it unbiased by subtracting a learned function of the state b_t(s_t), known as a baseline (Williams, 1992), from the return. The resulting gradient is ∇_θ log π(a_t|s_t; θ) (R_t − b_t(s_t)).
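The baseline-corrected REINFORCE gradient can be expressed as a loss that automatic differentiation frameworks can minimize; the fragment below is an illustrative sketch (not the authors' code) where `log_probs`, `returns` and `baselines` are assumed tensors for one batch of steps.

```python
import torch

def reinforce_with_baseline_loss(log_probs, returns, baselines):
    """Loss whose gradient is -sum_t grad log pi(a_t|s_t; theta) * (R_t - b_t(s_t))."""
    advantages = (returns - baselines).detach()   # the baseline only rescales the policy gradient
    return -(log_probs * advantages).sum()
```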
Algorithm 1 Asynchronous one-step Q-learning - pseudocode for each actor-learner thread.

  // Assume global shared θ, θ⁻, and counter T = 0.
  Initialize thread step counter t ← 0
  Initialize target network weights θ⁻ ← θ
  Initialize network gradients dθ ← 0
  Get initial state s
  repeat
    Take action a with ε-greedy policy based on Q(s, a; θ)
    Receive new state s′ and reward r
    y = r                               for terminal s′
        r + γ max_{a′} Q(s′, a′; θ⁻)    for non-terminal s′
    Accumulate gradients wrt θ: dθ ← dθ + ∂(y − Q(s, a; θ))²/∂θ
    s = s′
    T ← T + 1 and t ← t + 1
    if T mod I_target == 0 then
      Update the target network θ⁻ ← θ
    end if
    if t mod I_AsyncUpdate == 0 or s is terminal then
      Perform asynchronous update of θ using dθ.
      Clear gradients dθ ← 0.
    end if
  until T > T_max

A learned estimate of the value function is commonly used as the baseline, b_t(s_t) ≈ V^π(s_t), leading to a much lower variance estimate of the policy gradient. When an approximate value function is used as the baseline, the quantity R_t − b_t used to scale the policy gradient can be seen as an estimate of the advantage of action a_t in state s_t, or A(a_t, s_t) = Q(a_t, s_t) − V(s_t), because R_t is an estimate of Q^π(a_t, s_t) and b_t is an estimate of V^π(s_t).
This approach can be viewed as an actor-critic architecture where the policy π is the actor and the baseline b_t is the critic (Sutton & Barto, 1998; Degris et al., 2012).

# 4. Asynchronous RL Framework

We now present multi-threaded asynchronous variants of one-step Sarsa, one-step Q-learning, n-step Q-learning, and advantage actor-critic.
The aim in designing these methods was to find RL algorithms that can train deep neural network policies reliably and without large resource requirements. While the underlying RL methods are quite different, with actor-critic being an on-policy policy search method and Q-learning being an off-policy value-based method, we use two main ideas to make all four algorithms practical given our design goal.

First, we use asynchronous actor-learners, similarly to the Gorila framework (Nair et al., 2015), but instead of using separate machines and a parameter server, we use multiple CPU threads on a single machine. Keeping the learners on a single machine removes the communication costs of sending gradients and parameters and enables us to use Hogwild! (Recht et al., 2011) style updates for training.

Second, we make the observation that multiple actor-learners running in parallel are likely to be exploring different parts of the environment. Moreover, one can explicitly use different exploration policies in each actor-learner to maximize this diversity. By running different exploration policies in different threads, the overall changes being made to the parameters by multiple actor-learners applying online updates in parallel are likely to be less correlated in time than a single agent applying online updates. Hence, we do not use a replay memory and rely on parallel actors employing different exploration policies to perform the stabilizing role undertaken by experience replay in the DQN training algorithm.
In addition to stabilizing learning, using multiple parallel actor-learners has multiple practical benefits. First, we obtain a reduction in training time that is roughly linear in the number of parallel actor-learners. Second, since we no longer rely on experience replay for stabilizing learning, we are able to use on-policy reinforcement learning methods such as Sarsa and actor-critic to train neural networks in a stable way. We now describe our variants of one-step Q-learning, one-step Sarsa, n-step Q-learning and advantage actor-critic.

Asynchronous one-step Q-learning: Pseudocode for our variant of Q-learning, which we call Asynchronous one-step Q-learning, is shown in Algorithm 1. Each thread interacts with its own copy of the environment and at each step computes a gradient of the Q-learning loss. We use a shared and slowly changing target network in computing the Q-learning loss, as was proposed in the DQN training method. We also accumulate gradients over multiple timesteps before they are applied, which is similar to using minibatches. This reduces the chances of multiple actor-learners overwriting each other's updates. Accumulating updates over several steps also provides some ability to trade off computational efficiency for data efficiency.
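The following simplified sketch (my own illustration, not the released implementation; `make_env` and `compute_grad_fn` are placeholder names) shows the update pattern of one actor-learner thread: accumulate gradients locally for a few steps, then apply them to the globally shared parameters without any locking, in Hogwild! style.

```python
import threading
import numpy as np

# Globally shared parameter vector, updated by every thread without locks (Hogwild! style).
shared_theta = np.zeros(1000)

def actor_learner(make_env, compute_grad, lr=1e-3, i_async_update=5, t_max_total=10**6):
    """One actor-learner thread: interact with its own environment copy, apply lock-free updates."""
    env = make_env()                          # each thread owns its own environment instance
    d_theta = np.zeros_like(shared_theta)     # locally accumulated gradients
    s = env.reset()
    for t in range(1, t_max_total + 1):
        grad, s, done = compute_grad(shared_theta, env, s)   # gradient of the one-step loss
        d_theta += grad                       # accumulate over several steps (minibatch-like)
        if t % i_async_update == 0 or done:
            shared_theta[:] -= lr * d_theta   # in-place update so all threads see the change
            d_theta[:] = 0.0
        if done:
            s = env.reset()

# Launch 16 actor-learner threads sharing `shared_theta`.
threads = [threading.Thread(target=actor_learner, args=(make_env, compute_grad_fn))
           for _ in range(16)]
for th in threads:
    th.start()
```

Note that this is only a schematic of the update pattern; a real multi-core implementation would need workers that release Python's GIL (or processes with shared memory) to actually run in parallel.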
Finally, we found that giving each thread a different exploration policy helps improve robustness. Adding diversity to exploration in this manner also generally improves performance through better exploration. While there are many possible ways of making the exploration policies differ, we experiment with using ε-greedy exploration with ε periodically sampled from some distribution by each thread.

Asynchronous one-step Sarsa: The asynchronous one-step Sarsa algorithm is the same as asynchronous one-step Q-learning as given in Algorithm 1, except that it uses a different target value for Q(s, a). The target value used by one-step Sarsa is r + γQ(s′, a′; θ⁻), where a′ is the action taken in state s′ (Rummery & Niranjan, 1994; Sutton & Barto, 1998).
We again use a target network and updates accumulated over multiple timesteps to stabilize learning.

Asynchronous n-step Q-learning: Pseudocode for our variant of multi-step Q-learning is shown in Supplementary Algorithm S2. The algorithm is somewhat unusual because it operates in the forward view by explicitly computing n-step returns, as opposed to the more common backward view used by techniques like eligibility traces (Sutton & Barto, 1998). We found that using the forward view is easier when training neural networks with momentum-based methods and backpropagation through time. In order to compute a single update, the algorithm first selects actions using its exploration policy for up to t_max steps or until a terminal state is reached. This process results in the agent receiving up to t_max rewards from the environment since its last update. The algorithm then computes gradients for n-step Q-learning updates for each of the state-action pairs encountered since the last update. Each n-step update uses the longest possible n-step return, resulting in a one-step update for the last state, a two-step update for the second last state, and so on, for a total of up to t_max updates. The accumulated updates are applied in a single gradient step.

Asynchronous advantage actor-critic: The algorithm, which we call asynchronous advantage actor-critic (A3C), maintains a policy π(a_t|s_t; θ) and an estimate of the value function V(s_t; θ_v). Like our variant of n-step Q-learning, our variant of actor-critic also operates in the forward view and uses the same mix of n-step returns to update both the policy and the value function. The policy and the value function are updated after every t_max actions or when a terminal state is reached. The update performed by the algorithm can be seen as ∇_{θ′} log π(a_t|s_t; θ′) A(s_t, a_t; θ, θ_v), where A(s_t, a_t; θ, θ_v) is an estimate of the advantage function given by Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V(s_{t+k}; θ_v) − V(s_t; θ_v), where k can vary from state to state and is upper-bounded by t_max. The pseudocode for the algorithm is presented in Supplementary Algorithm S3.

As with the value-based methods, we rely on parallel actor-learners and accumulated updates for improving training stability. Note that while the parameters θ of the policy and θ_v of the value function are shown as being separate for generality, we always share some of the parameters in practice. We typically use a convolutional neural network that has one softmax output for the policy π(a_t|s_t; θ) and one linear output for the value function V(s_t; θ_v), with all non-output layers shared.

We also found that adding the entropy of the policy π to the objective function improved exploration by discouraging premature convergence to suboptimal deterministic policies. This technique was originally proposed by (Williams & Peng, 1991), who found that it was particularly helpful on tasks requiring hierarchical behavior. The gradient of the full objective function including the entropy regularization term with respect to the policy parameters takes the form ∇_{θ′} log π(a_t|s_t; θ′)(R_t − V(s_t; θ_v)) + β ∇_{θ′} H(π(s_t; θ′)), where H is the entropy. The hyperparameter β controls the strength of the entropy regularization term.
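The sketch below (an illustrative PyTorch-style reconstruction, not the authors' code) combines the pieces above into a per-rollout A3C loss: the policy gradient term scaled by the n-step advantage, the value regression term, and the entropy bonus weighted by β. The 0.5 weight on the value loss is a common choice and an assumption here, not a value stated in the text.

```python
import torch

def a3c_loss(log_probs, entropies, values, rewards, bootstrap_value,
             gamma=0.99, beta=0.01, value_coef=0.5):
    """Loss for one rollout of up to t_max steps (all lists are ordered by time step)."""
    R = bootstrap_value                       # V(s_{t+k}; theta_v), or 0 at a terminal state
    policy_loss, value_loss, entropy_bonus = 0.0, 0.0, 0.0
    for t in reversed(range(len(rewards))):   # longest available n-step return for each step
        R = rewards[t] + gamma * R
        advantage = R - values[t]
        policy_loss = policy_loss - log_probs[t] * advantage.detach()
        value_loss = value_loss + advantage.pow(2)
        entropy_bonus = entropy_bonus + entropies[t]
    return policy_loss + value_coef * value_loss - beta * entropy_bonus
```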
Optimization: We investigated three different optimization algorithms in our asynchronous framework: SGD with momentum, RMSProp (Tieleman & Hinton, 2012) without shared statistics, and RMSProp with shared statistics. We used the standard non-centered RMSProp update given by

g ← αg + (1 − α)Δθ²  and  θ ← θ − η Δθ / √(g + ε),   (1)

where all operations are performed elementwise. A comparison on a subset of Atari 2600 games showed that a variant of RMSProp where statistics g are shared across threads is considerably more robust than the other two methods. Full details of the methods and comparisons are included in Supplementary Section 7.

# 5. Experiments

We use four different platforms for assessing the properties of the proposed framework. We perform most of our experiments using the Arcade Learning Environment (Bellemare et al., 2012), which provides a simulator for Atari 2600 games. This is one of the most commonly used benchmark environments for RL algorithms. We use the Atari domain to compare against state of the art results (Van Hasselt et al., 2015; Wang et al., 2015; Schaul et al., 2015; Nair et al., 2015; Mnih et al., 2015), as well as to carry out a detailed stability and scalability analysis of the proposed methods. We performed further comparisons using the TORCS 3D car racing simulator (Wymann et al., 2013). We also use two additional domains to evaluate only the A3C algorithm, MuJoCo and Labyrinth. MuJoCo (Todorov, 2015) is a physics simulator for evaluating agents on continuous motor control tasks with contact dynamics. Labyrinth is a new 3D environment where the agent must learn to find rewards in randomly generated mazes from a visual input.
[Figure 1: learning curves (average score vs. training time in hours) on Beamrider, Breakout, Pong, Q*bert and Space Invaders for DQN, 1-step Q, 1-step SARSA, n-step Q and A3C.]

Figure 1. Learning speed comparison for DQN and the new asynchronous algorithms on five Atari 2600 games.
DQN was trained on a single Nvidia K40 GPU while the asynchronous methods were trained using 16 CPU cores. The plots are averaged over 5 runs. In the case of DQN the runs were for different seeds with fixed hyperparameters. For asynchronous methods we average over the best 5 models from 50 experiments with learning rates sampled from LogUniform(10⁻⁴, 10⁻²) and all other hyperparameters fixed.
The precise details of our experimental setup can be found in Supplementary Section 8.

Table 1. Mean and median human-normalized scores on 57 Atari games using the human starts evaluation metric. Supplementary Table S3 shows the raw scores for all games.

| Method | Training Time | Mean | Median |
|---|---|---|---|
| DQN | 8 days on GPU | 121.9% | 47.5% |
| Gorila | 4 days, 100 machines | 215.2% | 71.3% |
| D-DQN | 8 days on GPU | 332.9% | 110.9% |
| Dueling D-DQN | 8 days on GPU | 343.8% | 117.1% |
| Prioritized DQN | 8 days on GPU | 463.6% | 127.6% |
| A3C, FF | 1 day on CPU | 344.1% | 68.2% |
| A3C, FF | 4 days on CPU | 496.8% | 116.6% |
| A3C, LSTM | 4 days on CPU | 623.0% | 112.6% |

# 5.1. Atari 2600 Games
We first present results on a subset of Atari 2600 games to demonstrate the training speed of the new methods. Figure 1 compares the learning speed of the DQN algorithm trained on an Nvidia K40 GPU with the asynchronous methods trained using 16 CPU cores on five Atari 2600 games. The results show that all four asynchronous methods we presented can successfully train neural network controllers on the Atari domain. The asynchronous methods tend to learn faster than DQN, with significantly faster learning on some games, while training on only 16 CPU cores. Additionally, the results suggest that n-step methods learn faster than one-step methods on some games. Overall, the policy-based advantage actor-critic method significantly outperforms all three value-based methods.

We then evaluated asynchronous advantage actor-critic on 57 Atari games. In order to compare with the state of the art in Atari game playing, we largely followed the training and evaluation protocol of (Van Hasselt et al., 2015).
Specifically, we tuned hyperparameters (learning rate and amount of gradient norm clipping) using a search on six Atari games (Beamrider, Breakout, Pong, Q*bert, Seaquest and Space Invaders) and then fixed all hyperparameters for all 57 games. We trained both a feedforward agent with the same architecture as (Mnih et al., 2015; Nair et al., 2015; Van Hasselt et al., 2015) as well as a recurrent agent with an additional 256 LSTM cells after the final hidden layer. We additionally used the final network weights for evaluation to make the results more comparable to the original results from (Bellemare et al., 2012).
We trained our agents for four days using 16 CPU cores, while the other agents were trained for 8 to 10 days on Nvidia K40 GPUs. Table 1 shows the average and median human-normalized scores obtained by our agents trained by asynchronous advantage actor-critic (A3C) as well as the current state-of-the-art. Supplementary Table S3 shows the scores on all games. A3C significantly improves on the state-of-the-art average score over 57 games in half the training time of the other methods while using only 16 CPU cores and no GPU. Furthermore, after just one day of training, A3C matches the average human normalized score of Dueling Double DQN and almost reaches the median human normalized score of Gorila. We note that many of the improvements that are presented in Double DQN (Van Hasselt et al., 2015) and Dueling Double DQN (Wang et al., 2015) can be incorporated into the 1-step Q and n-step Q methods presented in this work with similar potential improvements.
# 5.2. TORCS Car Racing Simulator

We also compared the four asynchronous methods on the TORCS 3D car racing game (Wymann et al., 2013). TORCS not only has more realistic graphics than Atari 2600 games, but also requires the agent to learn the dynamics of the car it is controlling. At each step, an agent received only a visual input in the form of an RGB image of the current frame as well as a reward proportional to the agent's velocity along the center of the track at the agent's current position.
We used the same neural network architecture as the one used in the Atari experiments specified in Supplementary Section 8. We performed experiments using four different settings: the agent controlling a slow car with and without opponent bots, and the agent controlling a fast car with and without opponent bots. Full results can be found in Supplementary Figure S6. A3C was the best performing agent, reaching between roughly 75% and 90% of the score obtained by a human tester on all four game configurations in about 12 hours of training. A video showing the learned driving behavior of the A3C agent can be found at https://youtu.be/0xo1Ldx3L5Q.

Table 2. The average training speedup for each method and number of threads, averaged over seven Atari games. To compute the training speed-up on a single game we measured the time required to reach a fixed reference score using each method and number of threads. The speedup from using n threads on a game was defined as the time required to reach a fixed reference score using one thread divided by the time required to reach the reference score using n threads. The table shows the speedups averaged over seven Atari games (Beamrider, Breakout, Enduro, Pong, Q*bert, Seaquest, and Space Invaders).

| Method | 1 thread | 2 threads | 4 threads | 8 threads | 16 threads |
|---|---|---|---|---|---|
| 1-step Q | 1.0 | 3.0 | 6.3 | 13.3 | 24.1 |
| 1-step SARSA | 1.0 | 2.8 | 5.9 | 13.1 | 22.1 |
| n-step Q | 1.0 | 2.7 | 5.9 | 10.7 | 17.2 |
| A3C | 1.0 | 2.1 | 3.7 | 6.9 | 12.5 |
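The speedup definition in the caption above is a simple ratio; a small illustrative computation (the per-game times below are hypothetical, not data from the paper):

```python
def speedup(time_one_thread_hours, time_n_threads_hours):
    """Speedup on one game = time to reach the reference score with 1 thread / time with n threads."""
    return time_one_thread_hours / time_n_threads_hours

# Hypothetical per-game times (1 thread vs. n threads), averaged over games as in Table 2.
per_game = [speedup(t1, tn) for t1, tn in [(20.0, 1.0), (12.0, 0.8), (30.0, 1.5)]]
average_speedup = sum(per_game) / len(per_game)
```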
# 5.3. Continuous Action Control Using the MuJoCo Physics Simulator

We also examined a set of tasks where the action space is continuous. In particular, we looked at a set of rigid body physics domains with contact dynamics where the tasks include many examples of manipulation and locomotion. These tasks were simulated using the MuJoCo physics engine. We evaluated only the asynchronous advantage actor-critic algorithm since, unlike the value-based methods, it is easily extended to continuous actions. In all problems, using either the physical state or pixels as input, asynchronous advantage actor-critic found good solutions in less than 24 hours of training and typically in under a few hours. Some successful policies learned by our agent can be seen in the following video: https://youtu.be/Ajjc08-iPx8.
Further details about this experiment can be found in Supplementary Section 9.

# 5.4. Labyrinth

We performed an additional set of experiments with A3C on a new 3D environment called Labyrinth. The specific task we considered involved the agent learning to find rewards in randomly generated mazes. At the beginning of each episode the agent was placed in a new randomly generated maze consisting of rooms and corridors. Each maze contained two types of objects that the agent was rewarded for finding: apples and portals.
Picking up an apple led to a reward of 1. Entering a portal led to a reward of 10, after which the agent was respawned in a new random location in the maze and all previously collected apples were regenerated. An episode terminated after 60 seconds, after which a new episode would begin. The aim of the agent is to collect as many points as possible in the time limit, and the optimal strategy involves first finding the portal and then repeatedly going back to it after each respawn. This task is much more challenging than the TORCS driving domain because the agent is faced with a new maze in each episode and must learn a general strategy for exploring random mazes. We trained an A3C LSTM agent on this task using only 84 × 84 RGB images as input.
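A minimal sketch of the reward structure described above (purely illustrative; the environment interface and all names here are assumptions, not the actual Labyrinth code):

```python
APPLE_REWARD = 1.0
PORTAL_REWARD = 10.0
EPISODE_LENGTH_SECONDS = 60

def step_reward(event, maze):
    """Reward logic for one event in a Labyrinth episode."""
    if event == "apple":
        return APPLE_REWARD
    if event == "portal":
        maze.respawn_agent_randomly()    # agent is moved to a new random location
        maze.regenerate_apples()         # all previously collected apples reappear
        return PORTAL_REWARD
    return 0.0
```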
The final average score of around 50 indicates that the agent learned a reasonable strategy for exploring random 3D mazes using only a visual input. A video showing one of the agents exploring previously unseen mazes is included at https://youtu.be/nMR5mjCFZCw.

# 5.5. Scalability and Data Efficiency

We analyzed the effectiveness of our proposed framework by looking at how the training time and data efficiency changes with the number of parallel actor-learners. When using multiple workers in parallel and updating a shared model, one would expect that in an ideal case, for a given task and algorithm, the number of training steps to achieve a certain score would remain the same with varying numbers of workers. Therefore, the advantage would be solely due to the ability of the system to consume more data in the same amount of wall clock time and possibly improved exploration. Table 2 shows the training speed-up achieved by using increasing numbers of parallel actor-learners averaged over seven Atari games. These results show that all four methods achieve substantial speedups from using multiple worker threads, with 16 threads leading to at least an order of magnitude speedup.
This confirms that our proposed framework scales well with the number of parallel workers, making efficient use of resources. Somewhat surprisingly, the asynchronous one-step Q-learning and Sarsa algorithms exhibit superlinear speedups that cannot be explained by purely computational gains. We observe that one-step methods (one-step Q and one-step Sarsa) often require less data to achieve a particular score when using more parallel actor-learners. We believe this is due to the positive effect of multiple threads reducing the bias in one-step methods. These effects are shown more clearly in Figure 3, which shows plots of the average score against the total number of training frames for different numbers of actor-learners and training methods on five Atari games, and Figure 4, which shows plots of the average score against wall-clock time.
[Figure 2: scatter plots of final score against learning rate for A3C on Beamrider, Breakout, Pong, Q*bert and Space Invaders.]

Figure 2. Scatter plots of scores obtained by asynchronous advantage actor-critic on five games (Beamrider, Breakout, Pong, Q*bert, Space Invaders) for 50 different learning rates and random initializations. On each game, there is a wide range of learning rates for which all random initializations achieve good scores. This shows that A3C is quite robust to learning rates and initial random weights.

# 5.6. Robustness and Stability

Finally, we analyzed the stability and robustness of the four proposed asynchronous algorithms. For each of the four algorithms we trained models on five games (Breakout, Beamrider, Pong, Q*bert, Space Invaders) using 50 different learning rates and random initializations. Figure 2 shows scatter plots of the resulting scores for A3C, while Supplementary Figure S11 shows plots for the other three methods. There is usually a range of learning rates for each method and game combination that leads to good scores, indicating that all methods are quite robust to the choice of learning rate and random initialization. The fact that there are virtually no points with scores of 0 in regions with good learning rates indicates that the methods are stable and do not collapse or diverge once they are learning.
# 6. Conclusions and Discussion

We have presented asynchronous versions of four standard reinforcement learning algorithms and showed that they are able to train neural network controllers on a variety of domains in a stable manner. Our results show that in our proposed framework stable training of neural networks through reinforcement learning is possible with both value-based and policy-based methods, off-policy as well as on-policy methods, and in discrete as well as continuous domains. When trained on the Atari domain using 16 CPU cores, the proposed asynchronous algorithms train faster than DQN trained on an Nvidia K40 GPU, with A3C surpassing the current state-of-the-art in half the training time.

Combining other existing reinforcement learning methods or recent advances in deep reinforcement learning with our asynchronous framework presents many possibilities for immediate improvements to the methods we presented. While our n-step methods operate in the forward view (Sutton & Barto, 1998) by using corrected n-step returns directly as targets, it has been more common to use the backward view to implicitly combine different returns through eligibility traces (Watkins, 1989; Sutton & Barto, 1998; Peng & Williams, 1996). The asynchronous advantage actor-critic method could potentially be improved by using other ways of estimating the advantage function, such as the generalized advantage estimation of (Schulman et al., 2015b). All of the value-based methods we investigated could benefit from different ways of reducing over-estimation bias of Q-values (Van Hasselt et al., 2015; Bellemare et al., 2016). Yet another, more speculative, direction is to try and combine the recent work on true online temporal difference methods (van Seijen et al., 2015) with nonlinear function approximation.

In addition to these algorithmic improvements, a number of complementary improvements to the neural network architecture are possible. The dueling architecture of (Wang et al., 2015) has been shown to produce more accurate estimates of Q-values by including separate streams for the state value and advantage in the network. The spatial softmax proposed by (Levine et al., 2015) could improve both value-based and policy-based methods by making it easier for the network to represent feature coordinates.
One of our main findings is that using parallel actor-learners to update a shared model had a stabilizing effect on the learning process of the three value-based methods we considered. While this shows that stable online Q-learning is possible without experience replay, which was used for this purpose in DQN, it does not mean that experience replay is not useful. Incorporating experience replay into the asynchronous reinforcement learning framework could substantially improve the data efficiency of these methods by reusing old data. This could in turn lead to much faster training times in domains like TORCS where interacting with the environment is more expensive than updating the model for the architecture we used.

# ACKNOWLEDGMENTS

We thank Thomas Degris, Remi Munos, Marc Lanctot, Sasha Vezhnevets and Joseph Modayil for many helpful discussions, suggestions and comments on the paper. We also thank the DeepMind evaluation team for setting up the environments used to evaluate the agents in the paper.
[Figure 3: average score vs. number of training epochs for different numbers of actor-learners on five Atari games.]

Figure 3. Data efficiency comparison of different numbers of actor-learners for three asynchronous methods on five Atari games. The x-axis shows the total number of training epochs where an epoch corresponds to four million frames (across all threads). The y-axis shows the average score. Each curve shows the average over the three best learning rates. Single step methods show increased data efficiency from more parallel workers. Results for Sarsa are shown in Supplementary Figure S9.
[Figure 4: average score vs. wall-clock training time (hours) for different numbers of actor-learners on five Atari games, including Beamrider, Q*bert and Space Invaders.]

Figure 4. Training speed comparison of different numbers of actor-learners on five Atari games. The x-axis shows training time in hours while the y-axis shows the average score. Each curve shows the average over the three best learning rates. All asynchronous methods show significant speedups from using greater numbers of parallel actor-learners. Results for Sarsa are shown in Supplementary Figure S10.

# References
Bellemare, Marc G., Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Bellemare, Marc G., Ostrovski, Georg, Guez, Arthur, Thomas, Philip S., and Munos, Rémi. Increasing the action gap: New operators for reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2016.

Bertsekas, Dimitri P. Distributed dynamic programming. Automatic Control, IEEE Transactions on, 27(3):610-616, 1982.

Chavez, Kevin, Ong, Hao Yi, and Hong, Augustus. Distributed deep Q-learning. Technical report, Stanford University, June 2015.

Degris, Thomas, Pilarski, Patrick M., and Sutton, Richard S. Model-free reinforcement learning with continuous action in practice. In American Control Conference (ACC), 2012, pp. 2177-2182. IEEE, 2012.

Grounds, Matthew and Kudenko, Daniel. Parallel reinforcement learning with linear function approximation. In Proceedings of the 5th, 6th and 7th European Conference on Adaptive and Learning Agents and Multi-agent Systems: Adaptation and Multi-agent Learning, pp. 60-74. Springer-Verlag, 2008.

Koutník, Jan, Schmidhuber, Jürgen, and Gomez, Faustino. Evolving deep unsupervised convolutional networks for vision-based reinforcement learning. In Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, pp. 541-548. ACM, 2014.

Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.

Li, Yuxi and Schuurmans, Dale. MapReduce for parallel reinforcement learning. In Recent Advances in Reinforcement Learning - 9th European Workshop, EWRL 2011, Athens, Greece, September 9-11, 2011, Revised Selected Papers, pp. 309-320, 2011.

Lillicrap, Timothy P., Hunt, Jonathan J., Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.

Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare, Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, Petersen, Stig, Beattie, Charles, Sadik, Amir, Antonoglou, Ioannis, King, Helen, Kumaran, Dharshan, Wierstra, Daan, Legg, Shane, and Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. URL http://dx.doi.org/10.1038/nature14236.

Nair, Arun, Srinivasan, Praveen, Blackwell, Sam, Alcicek, Cagdas, Fearon, Rory, Maria, Alessandro De, Panneershelvam, Vedavyas, Suleyman, Mustafa, Beattie, Charles, Petersen, Stig, Legg, Shane, Mnih, Volodymyr, Kavukcuoglu, Koray, and Silver, David. Massively parallel methods for deep reinforcement learning. In ICML Deep Learning Workshop, 2015.

Peng, Jing and Williams, Ronald J. Incremental multi-step Q-learning. Machine Learning, 22(1-3):283-290, 1996.

Recht, Benjamin, Re, Christopher, Wright, Stephen, and Niu, Feng. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 693-701, 2011.

Riedmiller, Martin. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pp. 317-328. Springer Berlin Heidelberg, 2005.

Rummery, Gavin A. and Niranjan, Mahesan. On-line Q-learning using connectionist systems. 1994.

Schaul, Tom, Quan, John, Antonoglou, Ioannis, and Silver, David. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.

Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I., and Abbeel, Pieter. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015a.

Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b.

Sutton, R. and Barto, A. Reinforcement Learning: an Introduction. MIT Press, 1998.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.

Todorov, E. MuJoCo: Modeling, Simulation and Visualization of Multi-Joint Dynamics with Contact (ed 1.0). Roboti Publishing, 2015.

Tomassini, Marco. Parallel and distributed evolutionary algorithms: A review. Technical report, 1999.

Tsitsiklis, John N. Asynchronous stochastic approximation and Q-learning. Machine Learning, 16(3):185-202, 1994.

Van Hasselt, Hado, Guez, Arthur, and Silver, David. Deep reinforcement learning with double Q-learning. arXiv preprint arXiv:1509.06461, 2015.

van Seijen, H., Rupam Mahmood, A., Pilarski, P. M., Machado, M. C., and Sutton, R. S. True online temporal-difference learning. ArXiv e-prints, December 2015.

Wang, Z., de Freitas, N., and Lanctot, M. Dueling network architectures for deep reinforcement learning. ArXiv e-prints, November 2015.

Watkins, Christopher John Cornish Hellaby. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989.

Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229-256, 1992.

Williams, Ronald J. and Peng, Jing. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241-268, 1991.

Wymann, B., Espié, E., Guionneau, C., Dimitrakakis, C., Coulom, R., and Sumner, A. TORCS: The open racing car simulator, v1.3.5, 2013.
# Supplementary Material for "Asynchronous Methods for Deep Reinforcement Learning"

# 7. Optimization Details

We investigated two different optimization algorithms with our asynchronous framework: stochastic gradient descent and RMSProp. Our implementations of these algorithms do not use any locking in order to maximize throughput when using a large number of threads.

Momentum SGD: The implementation of SGD in an asynchronous setting is relatively straightforward and well studied (Recht et al., 2011). Let θ be the parameter vector that is shared across all threads and let Δθ_i be the accumulated gradients of the loss with respect to parameters θ computed by thread number i. Each thread i independently applies the standard momentum SGD update m_i = α m_i + (1 − α) Δθ_i followed by θ ← θ − η m_i with learning rate η, momentum α and without any locks. Note that in this setting, each thread maintains its own separate gradient and momentum vector.

RMSProp: While RMSProp (Tieleman & Hinton, 2012) has been widely used in the deep learning literature, it has not been extensively studied in the asynchronous optimization setting. The standard non-centered RMSProp update is given by
g ← αg + (1 − α)Δθ²   (S2)

θ ← θ − η Δθ / √(g + ε)   (S3)

where all operations are performed elementwise. In order to apply RMSProp in the asynchronous optimization setting one must decide whether the moving average of elementwise squared gradients g is shared or per-thread. We experimented with two versions of the algorithm. In one version, which we refer to as RMSProp, each thread maintains its own g shown in Equation S2. In the other version, which we call Shared RMSProp, the vector g is shared among threads and is updated asynchronously and without locking. Sharing statistics among threads also reduces memory requirements by using one fewer copy of the parameter vector per thread.

We compared these three asynchronous optimization algorithms in terms of their sensitivity to different learning rates and random network initializations. Figure S5 shows a comparison of the methods for two different reinforcement learning methods (Async n-step Q and Async Advantage Actor-Critic) on four different games (Breakout, Beamrider, Seaquest and Space Invaders). Each curve shows the scores for 50 experiments that correspond to 50 different random learning rates and initializations. The x-axis shows the rank of the model after sorting in descending order by final average score and the y-axis shows the final average score achieved by the corresponding model. In this representation, the algorithm that performs better would achieve higher maximum rewards on the y-axis and the algorithm that is most robust would have its slope closest to horizontal, thus maximizing the area under the curve. RMSProp with shared statistics tends to be more robust than RMSProp with per-thread statistics, which is in turn more robust than Momentum SGD.
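A minimal numpy sketch (illustrative only, not the released code; the learning rate and ε values are assumed defaults, not taken from this section) of the Shared RMSProp variant: the squared-gradient statistics g and the parameters θ are global and updated by every thread without locks.

```python
import numpy as np

# Globally shared state, updated by all actor-learner threads without locking.
theta = np.zeros(1000)       # shared parameters
g = np.zeros_like(theta)     # shared elementwise moving average of squared gradients

def shared_rmsprop_update(d_theta, lr=7e-4, alpha=0.99, eps=0.1):
    """Non-centered RMSProp update of Equations S2-S3 using the shared statistics g."""
    g[:] = alpha * g + (1.0 - alpha) * d_theta ** 2      # Equation S2, in place so it stays shared
    theta[:] -= lr * d_theta / np.sqrt(g + eps)          # Equation S3
```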
# 8. Experimental Setup

The experiments performed on a subset of Atari games (Figures 1, 3, 4 and Table 2) as well as the TORCS experiments (Figure S6) used the following setup. Each experiment used 16 actor-learner threads running on a single machine and no GPUs. All methods performed updates after every 5 actions (t_max = 5 and I_Update = 5) and shared RMSProp was used for optimization. The three asynchronous value-based methods used a shared target network that was updated every 40000 frames.

The Atari experiments used the same input preprocessing as (Mnih et al., 2015) and an action repeat of 4. The agents used the network architecture from (Mnih et al., 2013). The network used a convolutional layer with 16 filters
of size 8 × 8 with stride 4, followed by a convolutional layer with 32 filters of size 4 × 4 with stride 2, followed by a fully connected layer with 256 hidden units. All three hidden layers were followed by a rectifier nonlinearity. The value-based methods had a single linear output unit for each action representing the action-value. The model used by actor-critic agents had two sets of outputs: a softmax output with one entry per action representing the probability of selecting the action, and a single linear output representing the value function. All experiments used a discount of γ = 0.99 and an RMSProp decay factor of α = 0.99.

The value-based methods sampled the exploration rate ε from a distribution taking three values ε₁, ε₂, ε₃ with probabilities 0.4, 0.3, 0.3. The values of ε₁, ε₂, ε₃ were annealed from 1 to 0.1, 0.01, 0.5 respectively over the first four million frames. Advantage actor-critic used entropy regularization with a weight β = 0.01 for all Atari and TORCS experiments. We performed a set of 50 experiments for five Atari games and every TORCS level, each using a different random initialization and initial learning rate. The initial learning rate was sampled from a LogUniform(10⁻⁴, 10⁻²) distribution and annealed to 0 over the course of training. Note that in comparisons to prior work (Tables 1 and S3) we followed standard evaluation protocol and used fixed hyperparameters.
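For concreteness, here is a PyTorch-style sketch of the architecture described above: two convolutional layers (16 filters of 8 × 8 with stride 4, then 32 filters of 4 × 4 with stride 2), a 256-unit fully connected layer, and separate softmax policy and linear value heads on the shared trunk. This is an illustrative reimplementation, not the authors' original code; the 84 × 84 × 4 input assumes the standard DQN preprocessing referenced above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ACNetwork(nn.Module):
    """Shared convolutional trunk with a softmax policy head and a linear value head."""

    def __init__(self, num_actions, in_channels=4):   # 4 stacked 84x84 frames (DQN preprocessing)
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 16, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2)
        self.fc = nn.Linear(9 * 9 * 32, 256)            # 84x84 input -> 20x20 -> 9x9 feature maps
        self.policy_head = nn.Linear(256, num_actions)  # softmax output pi(a|s; theta)
        self.value_head = nn.Linear(256, 1)             # linear output V(s; theta_v)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.fc(x.flatten(start_dim=1)))
        return F.softmax(self.policy_head(x), dim=-1), self.value_head(x).squeeze(-1)
```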
# 9. Continuous Action Control Using the MuJoCo Physics Simulator

To apply the asynchronous advantage actor-critic algorithm to the MuJoCo tasks the necessary setup is nearly identical to that used in the discrete action domains, so here we enumerate only the differences required for the continuous action domains.

The essential elements for many of the tasks (i.e. the physics models and task objectives) are near identical to the tasks examined in (Lillicrap et al., 2015). However, the rewards and thus performance are not comparable for most of the tasks due to changes made by the developers of MuJoCo which altered the contact model.

For all the domains we attempted to learn the task using the physical state as input. The physical state consisted of the joint positions and velocities as well as the target position if the task required a target. In addition, for three of the tasks (pendulum, pointmass2D, and gripper) we also examined training directly from RGB pixel inputs. In the low dimensional physical state case, the inputs are mapped to a hidden state using one hidden layer with 200 ReLU units. In the cases where we used pixels, the input was passed through two layers of spatial convolutions without any non-linearity or pooling. In either case, the output of the encoder layers were fed to a single layer of 128 LSTM cells.

The most important difference in the architecture is in the output layer of the policy network. Unlike the discrete action domain where the action output is a softmax, here the two outputs of the policy network are two real number vectors which we treat as the mean vector µ and scalar variance σ²
of a multidimensional normal distribution with a spherical covariance. To act, the input is passed through the model to the output layer where we sample from the normal distribution determined by µ and σ². In practice, µ is modeled by a linear layer and σ² by a SoftPlus operation, log(1 + exp(x)), as the activation computed as a function of the output of a linear layer. In our experiments with continuous control problems the networks for the policy and the value function do not share any parameters, though this detail is unlikely to be crucial. Finally, since the episodes were typically at most several hundred time steps long, we did not use any bootstrapping in the policy or value function updates and batched each episode into a single update.

As in the discrete action case, we included an entropy cost which encouraged exploration. In the continuous case we used a cost on the differential entropy of the normal distribution defined by the output of the actor network, −(1/2)(log(2πσ²) + 1); we used a constant multiplier of 10⁻⁴ for this cost across all of the tasks examined.
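An illustrative PyTorch-style sketch of the continuous-action policy head described above (my own reconstruction, not the original code): a linear layer produces the mean µ, a SoftPlus of another linear layer produces the scalar variance σ², actions are sampled from the resulting Gaussian, and the differential entropy term −½(log(2πσ²) + 1) is returned for use as an exploration bonus.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianPolicyHead(nn.Module):
    """Continuous-action head: mean via a linear layer, variance via SoftPlus of a linear layer."""

    def __init__(self, hidden_size, action_dim):
        super().__init__()
        self.mu_layer = nn.Linear(hidden_size, action_dim)
        self.var_layer = nn.Linear(hidden_size, 1)             # scalar variance (spherical covariance)

    def forward(self, h):
        mu = self.mu_layer(h)
        var = F.softplus(self.var_layer(h))                    # sigma^2 = log(1 + exp(x)) > 0
        action = mu + torch.sqrt(var) * torch.randn_like(mu)   # sample a ~ N(mu, sigma^2 I)
        entropy = 0.5 * (torch.log(2 * math.pi * var) + 1.0)   # differential entropy of the Gaussian
        return action, mu, var, entropy
```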
The asynchronous advantage actor-critic algorithm finds solutions for all the domains. Figure S8 shows learning curves against wall-clock time, and demonstrates that most of the domains can be solved from states within a few hours. All of the experiments, including those done from pixel based observations, were run on CPU. Even in the case of solving the domains directly from pixel inputs we found that it was possible to reliably discover solutions within 24 hours. Figure S7 shows scatter plots of the top scores against the sampled learning rates. In most of the domains there is a large range of learning rates that consistently achieve good performance on the task.

# Algorithm S2 Asynchronous n-step Q-learning - pseudocode for each actor-learner thread.

  // Assume global shared parameter vector θ.
  // Assume global shared target parameter vector θ⁻.
  // Assume global shared counter T = 0.
  Initialize thread step counter t ← 1
  Initialize target network parameters θ⁻ ← θ
  Initialize thread-specific parameters θ′ = θ
  Initialize network gradients dθ ← 0
  repeat
    Clear gradients dθ ← 0
    Synchronize thread-specific parameters θ′ = θ
    t_start = t
    Get state s_t
    repeat
      Take action a_t according to the ε-greedy policy based on Q(s_t, a; θ′)
      Receive reward r_t and new state s_{t+1}
      t ← t + 1
      T ← T + 1
    until terminal s_t or t − t_start == t_max
    R = 0                        for terminal s_t
        max_a Q(s_t, a; θ⁻)      for non-terminal s_t
    for i ∈ {t − 1, ..., t_start} do
      R ← r_i + γR
      Accumulate gradients wrt θ′: dθ ← dθ + ∂(R − Q(s_i, a_i; θ′))²/∂θ′
    end for
    Perform asynchronous update of θ using dθ.
    if T mod I_target == 0 then
      θ⁻ ← θ
    end if
  until T > T_max

# Algorithm S3 Asynchronous advantage actor-critic - pseudocode for each actor-learner thread.
// Assume global shared parameter vectors θ and θ_v and global shared counter T = 0
// Assume thread-specific parameter vectors θ' and θ'_v
Initialize thread step counter t ← 1
repeat
    Reset gradients: dθ ← 0 and dθ_v ← 0.
    Synchronize thread-specific parameters θ' = θ and θ'_v = θ_v
    t_start = t
    Get state s_t
    repeat
        Perform a_t according to policy π(a_t|s_t; θ')
        Receive reward r_t and new state s_{t+1}
        t ← t + 1
        T ← T + 1
    until terminal s_t or t − t_start == t_max
    R = 0 for terminal s_t, R = V(s_t; θ'_v) for non-terminal s_t // Bootstrap from last state
    for i ∈ {t − 1, ..., t_start} do
        R ← r_i + γR
        Accumulate gradients wrt θ': dθ ← dθ + ∇_θ' log π(a_i|s_i; θ')(R − V(s_i; θ'_v))
        Accumulate gradients wrt θ'_v: dθ_v ← dθ_v + ∂(R − V(s_i; θ'_v))² / ∂θ'_v
    end for
    Perform asynchronous update of θ using dθ and of θ_v using dθ_v.
until T > T_max
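A Python sketch of the inner loop of Algorithm S3 may make the gradient accumulation easier to follow. This is not the authors' implementation: the `policy`, `value`, `env` and optimizer objects are assumed to exist (e.g., the Gaussian head sketched earlier, a small value network, and a Gym-style environment that accepts tensor-ready states), the hyperparameter values are placeholders, and the copy of local gradients onto the shared parameters is only indicated by a comment.

```python
import torch

GAMMA, T_MAX, ENTROPY_BETA = 0.99, 20, 1e-4  # assumed values; 1e-4 matches the entropy weight above

def actor_learner_step(env, state, policy, value, shared_optimizer):
    """One n-step rollout and update for a single actor-learner thread (cf. Algorithm S3).
    `policy(state)` is assumed to return (mu, sigma_sq) as in the Gaussian head sketched earlier."""
    log_probs, entropies, values, rewards = [], [], [], []
    done = False
    for _ in range(T_MAX):
        mu, sigma_sq = policy(state)
        dist = torch.distributions.Normal(mu, sigma_sq.sqrt())
        action = dist.sample()
        next_state, reward, done, _ = env.step(action.numpy())
        log_probs.append(dist.log_prob(action).sum(dim=-1))
        entropies.append(dist.entropy().sum(dim=-1))
        values.append(value(state))
        rewards.append(reward)
        state = next_state
        if done:
            break

    # Bootstrap from the last state unless the episode terminated (R = V(s_t; theta_v')).
    R = torch.zeros(1) if done else value(state).detach()
    policy_loss, value_loss = 0.0, 0.0
    for log_p, ent, v, r in reversed(list(zip(log_probs, entropies, values, rewards))):
        R = r + GAMMA * R
        advantage = R - v
        policy_loss = policy_loss - log_p * advantage.detach() - ENTROPY_BETA * ent
        value_loss = value_loss + advantage.pow(2)

    # The gradients of these local losses stand in for the accumulated d(theta) and d(theta_v);
    # a full implementation copies them onto the shared parameters before stepping (Hogwild-style).
    shared_optimizer.zero_grad()
    (policy_loss + 0.5 * value_loss).backward()
    shared_optimizer.step()
    return (env.reset() if done else state), done
```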
Figure S5. Comparison of three different optimization methods (Momentum SGD, RMSProp, Shared RMSProp) tested using two different algorithms (Async n-step Q and Async Advantage Actor-Critic) on four different Atari games (Breakout, Beamrider, Seaquest and Space Invaders). Each curve shows the final scores for 50 experiments sorted in descending order that covers a search over 50 random initializations and learning rates. The top row shows results using the Async n-step Q algorithm and the bottom row shows results with Async Advantage Actor-Critic. Each individual graph shows results for one of the four games and three different optimization methods. Shared RMSProp tends to be more robust to different learning rates and random initializations than Momentum SGD and RMSProp without sharing.
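The comparison in Figure S5 hinges on the Shared RMSProp variant, in which a single set of RMSProp statistics is updated and read by all actor-learner threads. The sketch below is our own illustration, not the paper's code: it assumes the standard RMSProp rule, g ← αg + (1 − α)Δθ² followed by θ ← θ − η Δθ/√(g + ε), and the hyperparameter values are placeholders.

```python
import numpy as np

class SharedRMSProp:
    """RMSProp where the squared-gradient statistics g are shared across all asynchronous
    actor-learner threads (a sketch; in a real setup g would live in shared memory)."""

    def __init__(self, n_params: int, lr: float = 7e-4, alpha: float = 0.99, eps: float = 0.1):
        self.lr, self.alpha, self.eps = lr, alpha, eps
        self.g = np.zeros(n_params)  # shared moving average of squared gradients

    def step(self, theta: np.ndarray, grad: np.ndarray) -> np.ndarray:
        # g <- alpha * g + (1 - alpha) * grad^2   (updated lock-free by every thread)
        self.g = self.alpha * self.g + (1.0 - self.alpha) * grad ** 2
        # theta <- theta - lr * grad / sqrt(g + eps)
        return theta - self.lr * grad / np.sqrt(self.g + self.eps)
```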
Figure S6. Comparison of algorithms on the TORCS car racing simulator. Four different configurations of car speed and opponent presence or absence are shown. In each plot, all four algorithms (one-step Q, one-step Sarsa, n-step Q and Advantage Actor-Critic) are compared on score vs. training time in wall clock hours. Multi-step algorithms achieve better policies much faster than one-step algorithms on all four levels. The curves show averages over the 5 best runs from 50 experiments with learning rates sampled from LogUniform(10⁻⁴, 10⁻²) and all other hyperparameters fixed.
Figure S7. Performance for the MuJoCo continuous action domains. Scatter plot of the best score obtained against learning rates sampled from LogUniform(10⁻⁵, 10⁻¹). For nearly all of the tasks there is a wide range of learning rates that lead to good performance on the task.

Figure S8. Score per episode vs. wall-clock time plots for the MuJoCo domains. Each plot shows error bars for the top 5 experiments.

Figure S9. Data efficiency comparison of different numbers of actor-learners for one-step Sarsa on five
Atari games. The x-axis shows the total number of training epochs, where an epoch corresponds to four million frames (across all threads). The y-axis shows the average score. Each curve shows the average of the three best performing agents from a search over 50 random learning rates. Sarsa shows increased data efficiency with increased numbers of parallel workers.

Figure S10. Training speed comparison of different numbers of actor-learners for one-step Sarsa on five
Atari games. The x-axis shows training time in hours while the y-axis shows the average score. Each curve shows the average of the three best performing agents from a search over 50 random learning rates. Sarsa shows significant speedups from using greater numbers of parallel actor-learners.

Figure S11. Scatter plots of scores obtained by one-step Q, one-step Sarsa, and n-step Q on five games (Beamrider, Breakout, Pong, Q*bert, Space Invaders) for 50 different learning rates and random initializations. All algorithms exhibit some level of robustness to the choice of learning rate.
Table S3. Raw scores for the human start condition (30 minutes emulator time) on 57 Atari games (Alien through Zaxxon), comparing DQN, Gorila, Double DQN, Dueling, Prioritized, A3C FF (1 day), A3C FF, and A3C LSTM. DQN scores taken from (Nair et al., 2015). Double DQN scores taken from (Van Hasselt et al., 2015), Dueling scores from (Wang et al., 2015) and Prioritized scores taken from (Schaul et al., 2015).
arXiv:1602.01137v1 [cs.IR] 2 Feb 2016

# A Dual Embedding Space Model for Document Ranking

Bhaskar Mitra (Microsoft, Cambridge, UK) [email protected]
Eric Nalisnick (University of California, Irvine, USA) [email protected]
Nick Craswell and Rich Caruana (Microsoft, Redmond, USA) nickcr, [email protected]
# ABSTRACT

A fundamental goal of search engines is to identify, given a query, documents that have relevant text. This is intrinsically difficult because the query and the document may use different vocabulary, or the document may contain query words without being relevant. We investigate neural word embeddings as a source of evidence in document ranking. We train a word2vec embedding model on a large unlabelled query corpus, but in contrast to how the model is commonly used, we retain both the input and the output projections, allowing us to leverage both the embedding spaces to derive richer distributional relationships. During ranking we map the query words into the input space and the document words into the output space, and compute a query-document relevance score by aggregating the cosine similarities across all the query-document word pairs. We postulate that the proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches. Our experiments show that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF. However, when ranking a larger set of candidate documents, we find the embeddings-based approach is prone to false positives, retrieving documents that are only loosely related to the query. We demonstrate that this problem can be solved effectively by ranking based on a linear mixture of the DESM and the word counting features.

Categories and Subject Descriptors: H.3 [Information Storage and Retrieval]: H.3.3 Information Search and Retrieval

Keywords: Document ranking; Word embeddings; Word2vec

Figure 1: A two dimensional PCA projection of the 200-dimensional embeddings.
Relevant documents are yellow, irrelevant documents are grey, and the query is blue. To visualize the results of multiple queries at once, before dimensionality reduction we centre query vectors at the origin and represent documents as the difference between the document vector and its query vector. (a) uses IN word vector centroids to represent both the query and the documents. (b) uses IN for the queries and OUT for the documents, and seems to have a higher density of relevant documents near the query.

# INTRODUCTION

Identifying relevant documents for a given query is a core challenge for Web search. For large-scale search engines, it is possible to identify a very small set of pages that can answer a good proportion of queries [2]. For such popular pages, clicks and hyperlinks may provide sufficient ranking evidence and it may not be important to match the query against the body text. However, in many Web search scenarios such query-content matching is crucial. If new content is available, the new and updated documents may not have click evidence or may have evidence that is out of date. For new or tail queries, there may be no memorized connections between the queries and the documents. Furthermore, many search engines and apps have a relatively smaller number of users, which limits their ability to answer queries based on memorized clicks.

This paper is an extended evaluation and analysis of the model proposed by Nalisnick et al. [32] to appear in WWW'16, April 11 - 15, 2016, Montreal, Canada. Copyright 2016 by the author(s).
There may even be insufficient behaviour data to learn a click-based embedding [18] or a translation model [10, 19]. In these cases it is crucial to model the relationship between the query and the document content, without click data.

When considering the relevance of document body text to a query, the traditional approach is to count repetitions of query terms in the document. Different transformation and weighting schemes for those counts lead to a variety of possible TF-IDF ranking features. One theoretical basis for such features is the probabilistic model of information retrieval, which has yielded the very successful TF-IDF formulation BM25 [35]. As noted by Robertson [34], the probabilistic approach can be restricted to consider only the original query terms or it can automatically identify additional terms that are correlated with relevance. However, the basic commonly-used form
Table 1: The nearest neighbours for the words "yale", "seahawks" and "eminem" according to the cosine similarity based on the IN-IN, OUT-OUT and IN-OUT vector comparisons for the different words in the vocabulary. These examples show that IN-IN and OUT-OUT cosine similarities are high for words that are similar by function or type (typical), and the IN-OUT cosine similarities are high between words that often co-occur in the same query or document (topical). The word2vec model used here was trained on a query corpus with a vocabulary of 2,748,230 words.

| Word | IN-IN neighbours | OUT-OUT neighbours | IN-OUT neighbours |
|---|---|---|---|
| yale | harvard, nyu, cornell, tulane, tufts | uconn, harvard, tulane, nyu, tufts | faculty, alumni, orientation, haven, graduate |
| seahawks | 49ers, broncos, packers, nfl, steelers | broncos, 49ers, nfl, packers, steelers | highlights, jerseys, tshirts, seattle, hats |
| eminem | rihanna, ludacris, kanye, beyonce, 2pac | rihanna, dre, kanye, beyonce, tupac | rap, featuring, tracklist, diss, performs |
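The distinction Table 1 draws between the two spaces is easy to reproduce once the IN and OUT matrices are available (for example, from the embeddings the authors release; see the footnote later in this section). The helper below is an illustrative sketch under the assumption that `W_in` and `W_out` are NumPy arrays with one row per vocabulary word and that `vocab` maps words to row indices; none of these names come from the paper.

```python
import numpy as np

def nearest_neighbours(word, vocab, W_in, W_out, mode="IN-OUT", k=5):
    """Return the k nearest neighbours of `word` under IN-IN, OUT-OUT or IN-OUT cosine similarity."""
    src = W_in if mode.startswith("IN") else W_out
    dst = W_out if mode.endswith("OUT") else W_in
    q = src[vocab[word]]
    q = q / np.linalg.norm(q)
    dst_norm = dst / np.linalg.norm(dst, axis=1, keepdims=True)
    sims = dst_norm @ q                      # cosine similarity to every vocabulary word
    order = np.argsort(-sims)
    inv_vocab = {i: w for w, i in vocab.items()}
    return [(inv_vocab[i], float(sims[i])) for i in order if inv_vocab[i] != word][:k]

# Example: nearest_neighbours("yale", vocab, W_in, W_out, mode="IN-OUT")
# should surface topically related words such as "faculty" and "alumni".
```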
of BM25 considers query terms only, under the assumption that non-query terms are less useful for document ranking.

In the probabilistic approach, the 2-Poisson model forms the basis for counting term frequency [6, 15, 36]. The stated goal is to distinguish between a document that is about a term and a document that merely mentions that term. These two types of documents have term frequencies from two different Poisson distributions, such that documents about the term tend to have higher term frequency than those that merely mention it. This explanation for the relationship between term frequency and aboutness is the basis for the TF function in BM25 [36].

The new approach in this paper uses word occurrences as evidence of aboutness, as in the probabilistic approach. However, instead of considering term repetition as evidence of aboutness it considers the relationship between the query terms and all the terms in the document. For example, given a query term "yale",
in addition to considering the number of times Yale is mentioned in the document, we look at whether related terms occur in the document, such as "faculty" and "alumni". Similarly, in a document about the Seahawks sports team one may expect to see the terms "highlights" and "jerseys". The occurrence of these related terms in sufficient numbers is a way to distinguish between documents that merely mention Yale or Seahawks and the documents that are about the university or about the sports team.
• We propose a document ranking feature based on comparing all the query words with all the document words, which is equivalent to comparing each query word to a centroid of the document word embeddings.
• We analyse the positive aspects of the new feature, preferring documents that contain many words related to the query words, but also note the potential of the feature to have false positive matches.
• We empirically compare the new approach to a single embedding and the traditional word counting features. The new approach works well on its own in a telescoping setting, re-ranking the top documents returned by a commercial Web search engine, and in combination with word counting for a more general document retrieval task.
# 2. DISTRIBUTIONAL SEMANTICS FOR IR

In this section we first introduce the Continuous Bag-of-Words (CBOW) model made popular by the software Word2Vec [28, 29]. Then, inspired by our findings that distinctly different topic-based relationships can be found by using both the input and the output embeddings jointly (the latter of which is usually discarded after training), we propose the Dual Embedding Space Model (DESM) for document ranking. With this motivation, in Section 2 we describe how the input and the output embedding spaces learned by a word2vec model may be used jointly, and why this is particularly attractive for modelling the aboutness aspect of document ranking.
Table 1 gives some anecdotal evidence of why this is true. If we look in the neighbourhood of the IN vector of the word "yale" then the other IN vectors that are close correspond to words that are functionally similar or of the same type, e.g., "harvard" and "nyu". A similar pattern emerges if we look at the OUT vectors in the neighbourhood of the OUT vector of "yale". On the other hand, if we look at the OUT vectors that are closest to the IN vector of "yale"
we find words like "faculty" and "alumni". We use this property of the IN-OUT embeddings to propose a novel Dual Embedding Space Model (DESM) for document ranking. Figure 1 further illustrates how in this Dual Embedding Space model, using the IN embeddings for the query words and the OUT embeddings for the document words, we get a much more useful similarity definition between the query and the relevant document centroids. The main contributions of this paper are,

# 2.1 Continuous Bag-of-Words

While many word embedding models have been proposed recently, the Continuous Bag-of-Words (CBOW) and the Skip-Gram (SG) architectures proposed by Mikolov et al. [29] are arguably the most popular (perhaps due to the popularity of the software Word2Vec¹, which implements both). Although here we will concentrate exclusively on the CBOW model, our proposed IR ranking methodology is just as applicable to vectors produced by SG, as both models produce qualitatively and quantitatively similar embeddings. The CBOW model learns a word's embedding via maximizing the log conditional probability of the word given the context words occurring within a fixed-sized window around that word. That is, the words in the context window serve as input, and from them, the model attempts to predict the center (missing) word.
For a formal definition, let c_k ∈ R^d be a d-dimensional, real-valued vector representing the kth context word c_k appearing in a (K−1)-sized window around an instance of word w_i, which is represented by a vector w_i ∈ R^d. The model "predicts" word w_i by adapting its representation vector such that it has a large inner-product with the mean of the context word vectors.

• A novel Dual Embedding Space Model, with one embedding for query words and a separate embedding for document words, learned jointly based on an unlabelled text corpus.

¹https://code.google.com/p/word2vec/

Figure 2: The architecture of a word2vec (CBOW) model considering a single context word. W_IN and W_OUT are the two weight matrices learnt during training and correspond to the IN and the OUT word embedding spaces of the model.

Training CBOW requires minimization of the following objective

$$\mathcal{L}_{CBOW} = \sum_{i=1}^{|D|} -\log p(w_i \mid C_K) = \sum_{i=1}^{|D|} -\log \frac{e^{\bar{C}_K^\top w_i}}{\sum_{v \in V} e^{\bar{C}_K^\top v}} \qquad (1)$$

where

$$\bar{C}_K = \frac{1}{K-1} \sum_{\substack{i-K \le k \le i+K \\ k \ne i}} c_k \qquad (2)$$
and D represents the training corpus. Notice that the probability is normalized by summing over all the vocabulary, which is quite costly when training on web-scale data. To make CBOW scalable, Mikolov et al. [29] proposed the following slightly altered negative sampling objective:

$$-\log p(w_i \mid C_K) \approx -\log \sigma\!\left(\bar{C}_K^\top w_i\right) - \sum_{n=1}^{N} \log \sigma\!\left(-\bar{C}_K^\top \hat{w}_n\right) \qquad (3)$$

where σ is the Sigmoid function and N is the number of negative sample words drawn either from the uniform or empirical distribution over the vocabulary. All our experiments were performed with the negative sampling objective.

A crucial detail often overlooked when using Word2Vec is that there are two different sets of vectors (represented above by c and w respectively, and henceforth referred to as the IN and OUT embedding spaces), which correspond to the W_IN and W_OUT weight matrices in Figure 2. By default, Word2Vec discards W_OUT at the end of training and outputs only W_IN.
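The negative sampling objective in Equation (3) is straightforward to compute once the context centroid is available. The snippet below is a small illustrative sketch, not the Word2Vec implementation: `W_in` and `W_out` are assumed embedding matrices, `context_ids` are the indices of the K−1 context words, and the negative word indices are assumed to have been drawn from a unigram (or uniform) distribution beforehand.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(W_in, W_out, context_ids, target_id, negative_ids):
    """CBOW loss for one (context, target) pair under the negative sampling objective (Eq. 3)."""
    c_bar = W_in[context_ids].mean(axis=0)               # context centroid, Eq. (2)
    pos = -np.log(sigmoid(c_bar @ W_out[target_id]))     # observed word, scored against the OUT space
    neg = -np.log(sigmoid(-(W_out[negative_ids] @ c_bar))).sum()
    return pos + neg
```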
Subsequent tasks determine word-to-word semantic relatedness by computing the cosine similarity:

$$sim(c_i, c_j) = \cos(c_i, c_j) = \frac{c_i^\top c_j}{\lVert c_i \rVert \lVert c_j \rVert} \qquad (4)$$

# 2.2 Dual Embedding Space Model

A key challenge for term-matching based retrieval is to distinguish whether a document merely references a term or is about that entity. See Figure 3 for a concrete example of two passages that contain the term "Albuquerque" an equal number of times although only one of the passages is about that entity. The presence of words like "population" and "metropolitan" indicates that the left passage is about Albuquerque, whereas the passage on the right just mentions it. However, these passages would be indistinguishable under term counting. The semantic similarity of non-matched terms (i.e. the words a TF feature would overlook) is crucial for inferring a document's topic of focus: its aboutness.

Due to its ability to capture word co-occurrence (i.e. perform missing word prediction), CBOW is a natural fit for modelling the aboutness of a document. The learnt embedding spaces contain useful knowledge about the distributional properties of words, allowing, in the case of Figure 3, an IR system to recognize the city-related terms in the left document.
With this motivation, we define a simple yet, as we will demonstrate, effective ranking function we call the Dual Embedding Space Model:

$$DESM(Q, D) = \frac{1}{|Q|} \sum_{q_i \in Q} \frac{q_i^\top \bar{D}}{\lVert q_i \rVert \lVert \bar{D} \rVert} \qquad (5)$$

where

$$\bar{D} = \frac{1}{|D|} \sum_{d_j \in D} \frac{d_j}{\lVert d_j \rVert} \qquad (6)$$

Here D̄ is the centroid of all the normalized vectors for the words in the document, serving as a single embedding for the whole document. In this formulation of the DESM, the document embeddings can be pre-computed, and at the time of ranking, we only need to sum the score contributions across the query terms. We expect that the ability to pre-compute a single document embedding is a very useful property when considering runtime efficiency.
IN-IN vs. IN-OUT. Hill et al. [16] noted, "Not all neural embeddings are born equal". As previously mentioned, the CBOW (and SG) model contains two separate embedding spaces (IN and OUT) whose interactions capture additional distributional semantics of words that are not observable by considering any of the two embedding spaces in isolation. Table 1 illustrates clearly how the CBOW model "pushes" the IN vectors of words closer to the OUT vectors of other words that they commonly co-occur with. In doing so, words that appear in similar contexts get pushed closer to each other within the IN embedding space (and also within the OUT embedding space). Therefore the IN-IN (or the OUT-OUT) cosine similarities are higher for words that are typically (by type or by function) similar, whereas the IN-OUT cosine similarities are higher for words that co-occur often in the training corpus (topically similar). This gives us at least two variants of the DESM, corresponding to retrieval in the IN-OUT space or the IN-IN space².

$$DESM_{IN\text{-}OUT}(Q, D) = \frac{1}{|Q|} \sum_{q_i \in Q} \frac{q_{IN,i}^\top \bar{D}_{OUT}}{\lVert q_{IN,i} \rVert \lVert \bar{D}_{OUT} \rVert} \qquad (7)$$

$$DESM_{IN\text{-}IN}(Q, D) = \frac{1}{|Q|} \sum_{q_i \in Q} \frac{q_{IN,i}^\top \bar{D}_{IN}}{\lVert q_{IN,i} \rVert \lVert \bar{D}_{IN} \rVert} \qquad (8)$$

²It is also possible to define DESM_OUT-OUT and DESM_OUT-IN, but based on limited experimentation we expect them to behave similarly to DESM_IN-IN and DESM_IN-OUT, respectively.
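A direct NumPy sketch of Equations (5)-(8) is given below. It assumes, as in the earlier snippet, that `W_in` and `W_out` are the IN and OUT embedding matrices and that `vocab` maps words to rows; out-of-vocabulary words are simply skipped, which is also what the experiments later in the paper do. The function names are ours.

```python
import numpy as np

def _unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def desm_score(query_words, doc_words, vocab, W_in, W_out, variant="IN-OUT"):
    """Dual Embedding Space Model score (Eqs. 5-8): mean cosine similarity between each
    query word (IN space) and the centroid of normalized document word vectors."""
    doc_space = W_out if variant == "IN-OUT" else W_in
    doc_vecs = [_unit(doc_space[vocab[w]]) for w in doc_words if w in vocab]
    query_vecs = [W_in[vocab[w]] for w in query_words if w in vocab]
    if not doc_vecs or not query_vecs:
        return 0.0
    d_bar = _unit(np.mean(doc_vecs, axis=0))                         # Eq. (6), normalized for the cosine
    return float(np.mean([_unit(q) @ d_bar for q in query_vecs]))    # Eq. (5)
```

In practice the document centroid d_bar can be pre-computed once per document, which is the runtime advantage noted above.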
Albuquerque is the most populous city in the U.S. state of New Mexico. The high-altitude city serves as the county seat of Bernalillo County, and it is situated in the central part of the state, straddling the Rio Grande. The city population is 557,169 as of the July 1, 2014, population estimate from the United States Census Bureau, and ranks as the 32nd-largest city in the U.S. The Metropolitan Statistical Area (or MSA) has a population of 902,797 according to the United States Census Bureau's most recently available estimate for July 1, 2013. (a) Allen suggested that they could program a BASIC interpreter for the device; after a call from Gates claiming to have a working interpreter, MITS requested a demonstration.
Since they didn't actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. Although they developed the interpreter on a simulator and not the actual device, the interpreter worked flawlessly when they demonstrated the interpreter to MITS in Albuquerque, New Mexico in March 1975; MITS agreed to distribute it, marketing it as Altair BASIC. (b)

Figure 3: Two different passages from Wikipedia that mention "Albuquerque" (highlighted in orange) exactly once. Highlighted in green are all the words that have an IN-OUT similarity score with the word "Albuquerque" above a fixed threshold (we choose -0.03 for this visualization) and can be considered as providing supporting evidence that (a) is about Albuquerque, whereas (b) happens to only mention the city.

In Section 4, we show that the DESM_IN-OUT is a better indication of aboutness than BM25, because of its knowledge of the word distributional properties, and than DESM_IN-IN, since topical similarity is a better indicator of aboutness than typical similarity.

Modelling document aboutness. We perform a simple word perturbation analysis to illustrate how the DESM can collect evidence on document aboutness from both matched and non-matched terms in the document. In Table 2, we consider five small passages of text. The first three passages are about Cambridge, Oxford and giraffes respectively. The next two passages are generated by replacing the word "giraffe" by the word "Cambridge" in the passage about giraffes, and vice versa.
We compute the DESM_IN-OUT and the DESM_IN-IN scores along with the term frequencies for each of these passages for the query term "cambridge". As expected, all three models score the passage about Cambridge highly. However, unlike the term frequency feature, the DESMs seem robust towards keyword stuffing³, at least in this specific example: when we replace the word "giraffe" with "cambridge" in the passage about giraffes, the DESMs still score the passage relatively low. This is exactly the kind of evidence that we expect the DESM to capture that may not be possible by simple term counting.

focusing at ranking for top positions is in fact quite common and has been used by many recent studies (e.g., [10, 18]).

Dot product vs. cosine similarity. In the DESM formulation (Equation 5) we compute the cosine similarity between every query word and the normalized document centroid. The use of cosine similarity (as opposed to, say, dot-product) is motivated by several factors. Firstly, much of the existing literature [28, 29] on CBOW and SG uses cosine similarity and normalized unit vectors (for performing vector algebra for word analogies). As the cosine similarity has been shown to perform well in practice in these embedding spaces we adopt the same strategy here.
A secondary justiï¬ cation can be drawn based on the observa- tions made by Wilson and Schakel [48] that the length of the non- normalized word vectors has a direct relation to the frequency of the word. In information retrieval (IR), it is well known that frequently occurring words are ineffective features for distinguishing relevant documents from irrelevant ones. The inverse-document frequency weighting is often used in IR to capture this effect. By normalizing the word vectors in the document before computing the document centroids, we are counteracting the extra inï¬ uence frequent words would have on the sum. On the other hand, both the DESMs score the passage about Oxford very highly. This is expected because both these passages contain many words that are likely to co-occur with the word "cam- bridge" in the training corpus. This implies that the DESM features are very susceptible to false positive matches and can only be used either in conjunction with other document ranking features, such as TF-IDF, or for re-ranking a smaller set of candidate documents already deemed at least somewhat relevant. This is similar to the tele- scoping evaluation setup described by Matveeva et al. [27], where multiple nested rankers are used to achieve better retrieval perfor- mance over a single ranker. At each stage of telescoping, a ranker is used to reduce the set of candidate documents that is passed on to the next. Improved performance is possible because the ranker that sees only top-scoring documents can specialize in handling such documents, for example by using different feature weights. In our experiments, we will see the DESM to be a poor standalone ranking signal on a larger set of documents, but performs signiï¬ cantly better against the BM25 and the LSA baselines once we reach a small high-quality candidate document set. This evaluation strategy of Training corpus. Our CBOW model is trained on a query cor- pus4 consisting of 618,644,170 queries and a vocabulary size of 2,748,230 words. The queries are sampled from Bingâ s large scale search logs from the period of August 19, 2014 to August 25, 2014. We repeat all our experiments using another CBOW model trained on a corpus of document body text with 341,787,174 distinct sen- tences sampled from the Bing search index and a corresponding vocabulary size of 5,108,278 words.
Empirical results on the perfor- mance of both the models are presented in Section 4. Out-of-vocabulary (OOV) words. One of the challenges of the embedding models is that they can only be applied to a ï¬ xed size vocabulary. It is possible to explore different strategies to deal with out-of-vocab (OOV) words in the Equation 5 5. But we leave this for future investigation and instead, in this paper, all the OOV words are ignored for computing the DESM score, but not for computing the TF-IDF feature, a potential advantage for the latter. 3https://en.wikipedia.org/wiki/Keyword_ stuffing 4We provide the IN and OUT word embeddings trained using word2vec on the Bing query corpus at http://research. microsoft.com/projects/DESM. 5In machine translation there are examples of interesting strategies to handle out-of-vocabulary words (e.g., [25])
Table 2: A word perturbation analysis to show how the DESM collects evidence on the aboutness of a document. The DESM models are more robust to irrelevant terms. For example, when the word "giraffe" is replaced by the word "cambridge", the passage on giraffes is still scored low by the DESM for the query "cambridge" because it finds low supporting evidence from the other words in the passage. However, the DESM confuses the passage about Oxford to be relevant for the query "cambridge" because it detects a high number of similar words in the passage that frequently co-occur with the word "Cambridge". Query: "cambridge". The five passage types are: a passage about Cambridge, a passage about Oxford, a passage about giraffes, the passage about giraffes with the word "giraffe" replaced by "Cambridge", and the passage about Cambridge with the word "Cambridge" replaced by "giraffe". The passage texts follow in that order, and their scores are summarized after the passages.

The city of Cambridge is a university city and the county town of Cambridgeshire, England. It lies in East Anglia, on the River Cam, about 50 miles (80 km) north of London. According to the United Kingdom Census 2011, its population was 123,867 (including 24,488 students). This makes Cambridge the second largest city in Cambridgeshire after Peterborough, and the 54th largest in the United Kingdom. There is archaeological evidence of settlement in the area during the Bronze Age and Roman times; under Viking rule Cambridge became an important trading centre.
The ï¬ rst town charters were granted in the 12th century, although city status was not conferred until 1951. Oxford is a city in the South East region of England and the county town of Oxfordshire. With a population of 159,994 it is the 52nd largest city in the United Kingdom, and one of the fastest growing and most ethnically diverse. Oxford has a broad economic base. Its industries include motor manufacturing, education, publishing and a large number of information technology and science-based businesses, some being academic offshoots. The city is known worldwide as the home of the University of Oxford, the oldest university in the English-speaking world. Buildings in Oxford demonstrate examples of every English architectural period since the arrival of the Saxons, including the mid-18th-century Radcliffe Camera. Oxford is known as the city of dreaming spires, a term coined by poet Matthew Arnold.
The giraffe (Giraffa camelopardalis) is an African even-toed ungulate mammal, the tallest living terrestrial animal and the largest ruminant. Its species name refers to its camel-like shape and its leopard-like colouring. Its chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its distinctive coat patterns. It is classiï¬ ed under the family Girafï¬ dae, along with its closest extant relative, the okapi. The nine subspecies are distinguished by their coat patterns. The giraffeâ
s scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannas, grasslands, and open woodlands. The cambridge (Giraffa camelopardalis) is an African even-toed ungulate mammal, the tallest living terrestrial animal and the largest ruminant. Its species name refers to its camel-like shape and its leopard- like colouring. Its chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its distinctive coat patterns.
It is classiï¬ ed under the family Girafï¬ dae, along with its closest extant relative, the okapi. The nine subspecies are distinguished by their coat patterns. The cambridgeâ s scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. giraffes usually inhabit savannas, grasslands, and open woodlands. The city of Giraffe is a university city and the county town of Cambridgeshire, England. It lies in East Anglia, on the River Cam, about 50 miles (80 km) north of London. According to the United Kingdom Census 2011, its population was 123,867 (including 24,488 students). This makes Giraffe the second largest city in Cambridgeshire after Peterborough, and the 54th largest in the United Kingdom. There is archaeological evidence of settlement in the area during the Bronze Age and Roman times; under Viking rule Giraffe became an important trading centre.
The first town charters were granted in the 12th century, although city status was not conferred until 1951.

| Passage | DESM (IN-OUT) Score | DESM (IN-IN) Score | Term Frequency Count |
|---|---|---|---|
| About Cambridge | -0.062 | 0.120 | 5 |
| About Oxford | -0.070 | 0.107 | 0 |
| About giraffes | -0.102 | 0.011 | 0 |
| Giraffes, "giraffe" replaced by "cambridge" | -0.094 | 0.033 | 3 |
| Cambridge, "cambridge" replaced by "giraffe" | -0.076 | 0.088 | 0 |

Document length normalization. In Equation 5 we normalize the scores linearly by both the query and the document lengths. While more sophisticated length normalization strategies, such as pivoted document length normalization [43], are reasonable, we leave this also for future work.
# 2.3 The Mixture Model

The DESM is a weak ranker and while it models some important aspects of document ranking, our experiments will show that it is effective only at ranking at high positions (i.e. documents we already know are at least somewhat relevant). We are inspired by previous work in neural language models, for example by Bengio et al. [4], which demonstrates that combining a neural model for predicting the next word with a more traditional counting-based language model is effective because the two models make different kinds of mistakes. Adopting a similar strategy we propose a simple and intuitive mixture model combining DESM with a term based feature, such as BM25, for the non-telescoping evaluation setup described in Section 3.2. We define the mixture model MM(Q, D) as

$$MM(Q, D) = \alpha \, DESM(Q, D) + (1 - \alpha) \, BM25(Q, D), \qquad \alpha \in \mathbb{R},\ 0 \le \alpha \le 1$$
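The mixture is a one-parameter linear interpolation, so the sweep described next reduces to evaluating a grid of α values on held-out training queries. The sketch below assumes precomputed `desm` and `bm25` score arrays and an `ndcg_at_k` evaluation helper; these names are illustrative, not from the paper.

```python
import numpy as np

def mixture_scores(desm: np.ndarray, bm25: np.ndarray, alpha: float) -> np.ndarray:
    """MM(Q, D) = alpha * DESM(Q, D) + (1 - alpha) * BM25(Q, D)."""
    return alpha * desm + (1.0 - alpha) * bm25

def sweep_alpha(desm, bm25, labels, ndcg_at_k, step=0.01, k=10):
    """Pick alpha in [0, 1] (at the paper's 0.01 intervals) that maximizes NDCG on training data."""
    best_alpha, best_ndcg = 0.0, -np.inf
    for alpha in np.arange(0.0, 1.0 + 1e-9, step):
        score = ndcg_at_k(mixture_scores(desm, bm25, alpha), labels, k)
        if score > best_ndcg:
            best_alpha, best_ndcg = alpha, score
    return best_alpha, best_ndcg
```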
# 3. EXPERIMENTS We compare the retrieval performance of DESM against BM25, a traditional count-based method, and Latent Semantic Analysis (LSA), a traditional vector-based method. We conduct our eval- uations on two different test sets (explicit and implicit relevance judgements) and under two different experimental conditions (a large collection of documents and a telescoped subset). Table 3: NDCG results comparing the DESMIN â OU T with the BM25 and the LSA baselines. The DESMIN â OU T performs signiï¬
Table 3: NDCG results comparing the DESM_IN-OUT with the BM25 and the LSA baselines. The DESM_IN-OUT performs significantly better than both the BM25 and the LSA baselines at all rank positions. It also performs better than the DESM_IN-IN on both the evaluation sets. The DESMs using embeddings trained on the query corpus also perform better than if trained on document body text. The highest NDCG value in every column is highlighted in bold and all the statistically significant (p < 0.05) differences over the BM25 baseline are marked with an asterisk (*).

| Model | Explicit NDCG@1 | Explicit NDCG@3 | Explicit NDCG@10 | Implicit NDCG@1 | Implicit NDCG@3 | Implicit NDCG@10 |
|---|---|---|---|---|---|---|
| BM25 | 23.69 | 29.14 | 44.77 | 13.65 | 27.41 | 49.26 |
| LSA | 22.41* | 28.25* | 44.24* | 16.35* | 31.75* | 52.05* |
| DESM (IN-IN, trained on body text) | 23.59 | 29.59 | 45.51* | 18.62* | 33.80* | 53.32* |
| DESM (IN-IN, trained on queries) | 23.75 | 29.72 | 46.36* | 18.37* | 35.18* | 54.20* |
| DESM (IN-OUT, trained on body text) | 24.06 | 30.32* | 46.57* | 19.67* | 35.53* | 54.13* |
| DESM (IN-OUT, trained on queries) | 25.02* | 31.14* | 47.89* | 20.66* | 37.34* | 55.84* |
In total the ï¬ nal evaluation set contains 171,302 unique documents across all queries which are then judged by human evaluators on a ï¬ ve point relevance scale (Perfect, Excellent, Good, Fair and Bad). In our non-telescoped experiment, we consider every distinct document in the test set as a candidate for every query in the same dataset. This setup is more in line with the traditional IR evaluation methodologies, where the model needs to retrieve the most relevant documents from a single large document collection. Our empirical results in Section 4 will show that the DESM model is a strong re-ranking signal, but as a standalone ranker, it is prone to false positives. Yet, when we mix our neural model (DESM) with a counting based model (BM25), good performance is achieved. For all the experiments we report the normalized discounted cumulative gain (NDCG) at different rank positions as a measure of performance for the different models under study. # 3.3 Baseline models Implicit feedback based test set. This dataset is sampled from the Bing logs from the period of the September 22, 2014 to September 28, 2014. The dataset consists of the search queries submitted by the user and the corresponding documents that were returned by the search engine in response. The documents are associated with a binary relevance judgment based on whether the document was clicked by the user. This test set contains 7,477 queries and the 42,573 distinct documents. We compare the DESM models to a term-matching based baseline, in BM25, and a vector space model baseline, in Latent Semantic Analysis (LSA)[8]. For the BM25 baseline we use the values of 1.7 for the k1 parameter and 0.95 for the b parameter based on a parameter sweep on the implicit feedback based training set. The LSA model is trained on the body text of 366,470 randomly sampled documents from Bingâ s index with a vocabulary size of 480,608 words. Note that unlike the word2vec models that train on word co-occurrence data, the LSA model by default trains on a word- document matrix. Implicit feedback based training set.
This dataset is sam- pled exactly the same way as the previous test but from the period of September 15, 2014 to September 21, 2014 and has 7,429 queries and 42,253 distinct documents. This set is used for tuning the parameters for the BM25 baseline and the mixture model. # 3.2 Experiment Setup We perform two distinct sets of evaluations for all the experimen- tal and baseline models. In the ï¬ rst experiment, we consider all documents retrieved by Bing (from the online scrapes in the case of the explicitly judged set or as recorded in the search logs in the case of the implicit feedback based sets) as the candidate set of documents to be re-ranked for each query. The fact that each of the documents were retrieved by the search engine implies that they are all at least marginally relevant to the query. Therefore, this experi- mental design isolates performance at the top ranks. As mentioned in Section 2.2, there is a parallel between this experiment setup and the telescoping [27] evaluation strategy, and has been used often in recent literature (e.g., [18, 41]). Note that by having a strong retrieval model, in the form of the Bing search engine, for ï¬ rst stage retrieval enables us to have a high conï¬ dence candidate set and in turn ensures reliable comparison with the baseline BM25 feature. # 4. RESULTS Table 3 shows the NCDG based performance evaluations un- der the telescoping setup. On both the explicitly judged and the implicit feedback based test sets the DESMIN â OU T performs sig- niï¬ cantly better than the BM25 and the LSA baselines, as well as the DESMIN â IN model. Under the all documents as candidates setup in Table 4, however, the DESMs (both IN-IN and IN-OUT) are clearly seen to not perform well as standalone document rankers. The mixture of DESMIN â OU T (trained on queries) and BM25 rectiï¬ es this problem and gives the best NDCG result under the non-telescoping settings and demonstrates a statistically signiï¬ cant improvement over the BM25 baseline. Figure 4 illustrates that the DESMIN â OU T is the most discrimi- nating feature for the relevant and the irrelevant documents retrieved by a ï¬ rst stage retrieval system.
However, BM25 is clearly superior in separating out the random irrelevant documents in the candidate set. The mixture model, unsurprisingly, has the good properties from both the DESMIN â OU T and the BM25 models. Figure 5 shows the joint distribution of the scores from the different models which further reinforces these points and shows that the DESM and the BM25 models make different errors. Table 4: Results of NDCG evaluations under the non-telescoping settings. Both the DESM and the LSA models perform poorly in the presence of random irrelevant documents in the candidate set. The mixture of DESMIN â OU T with BM25 achieves the best NDCG. The best NDCG values are highlighted per column in bold and all the statistically signiï¬ cant (p < 0.05) differences with the BM25 baseline are indicated by the asterisk (*)
Explicitly Judged Test Set NDCG@3 NDCG@1 NDCG@10 Implicit Feedback based Test Set NDCG@3 NDCG@1 NDCG@10 BM25 LSA DESM (IN-IN, trained on body text) DESM (IN-IN, trained on queries) DESM (IN-OUT, trained on body text) DESM (IN-OUT, trained on queries) BM25 + DESM (IN-IN, trained on body text) BM25 + DESM (IN-IN, trained on queries) BM25 + DESM (IN-OUT, trained on body text) BM25 + DESM (IN-OUT, trained on queries) 21.44 04.61* 06.69* 05.56* 01.01* 00.62* 21.53 21.58 21.47 21.54 26.09 04.63* 06.80* 05.59* 01.16* 00.58* 26.16 26.20 26.18 26.42* 37.53 04.83* 07.39* 06.03* 01.58* 00.81* 37.48 37.62 37.55 37.86* 11.68 01.97* 03.39* 02.62* 00.78* 00.29* 11.96 11.91 11.83 12.22* 22.14 03.24* 05.09* 04.06* 01.12* 00.39* 22.58* 22.47* 22.42* 22.96* 33.19 04.54* 07.13* 05.92* 02.07* 01.36* 33.70* 33.72* 33.60* 34.11* We do not report the results of evaluating the mixture models under the telescoping setup because tuning the α parameter under those settings on the training set results in the best performance from the standalone DESM models. Overall, we conclude that the DESM is primarily suited for ranking at top positions or in conjunction with other document ranking features.
Interestingly, under the telescoping settings, the LSA baseline also shows some (albeit small) improvement over the BM25 baseline on the implicit feedback based test set but a loss on the explicitly judged test set. With respect to the CBOWâ s training data, the DESM models with the embeddings trained on the query corpus performs signiï¬ cantly better than the models trained on document body text across different conï¬ gurations. We have a plausible hypothesis on why this happens. Users tend to choose the most signiï¬