# Value Iteration Networks
A red-colored particle needs to be navigated to a green goal using horizontal and vertical forces. Gray-colored obstacles are randomly positioned in the domain, and apply an elastic force and friction when contacted. This domain presents a non-trivial control problem, as the agent needs to both plan a feasible trajectory between the obstacles (or use them to bounce off) and control the particle (which has mass and inertia) to follow it. The state observation consists of the particle's continuous position and velocity, and a static 16 × 16 downscaled image of the obstacles and goal position in the domain.
In principle, such an observation is sufficient to devise a "rough plan" for the particle to follow. As in our previous experiments, we investigate whether a policy trained on several instances of this domain, with different start state, goal, and obstacle positions, would generalize to an unseen domain. For training we chose the guided policy search (GPS) algorithm with unknown dynamics [17], which is suitable for learning policies for continuous dynamics with contacts; we used the publicly available GPS code [7], and MuJoCo [30] for physical simulation. We generated 200 random training instances, and evaluate our performance on 40 different test instances from the same distribution.
Our VIN design is similar to the grid-world cases, with some important modifications: the attention module selects a 5 × 5 patch of the value $\bar{V}$, centered around the current (discretized) position in the map. The final reactive policy is a 3-layer fully connected network, with a 2-dimensional continuous output for the controls. In addition, due to the limited number of training domains, we pre-trained the VIN with transition weights that correspond to discounted grid-world transitions. This is a reasonable prior for the weights in a 2-d task, and we emphasize that even with this initialization, the initial value function is meaningless, since the reward map $f_R$ is not yet learned. We compare with a CNN-based reactive policy inspired by the state-of-the-art results in [21, 20], with 2 CNN layers for image processing, followed by a 3-layer fully connected network similar to the VIN reactive policy. Figure 4 shows the performance of the trained policies, measured as the final distance to the target. The VIN clearly outperforms the CNN on test domains. We also plot several trajectories of both policies on test domains, showing that the VIN learned a more sensible generalization of the task.
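To make the attention step concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's code) of cropping a 5 × 5 window of the value map around the agent's discretized position; the function name and the zero padding at the borders are our assumptions.

```python
import numpy as np

def attend_value_patch(V, pos, k=5):
    """Crop a k x k patch of the value map V around the (discretized)
    agent position, zero-padding so border states still get a full patch."""
    r = k // 2
    V_padded = np.pad(V, r, mode="constant")
    i, j = pos[0] + r, pos[1] + r          # shift indices into the padded map
    return V_padded[i - r:i + r + 1, j - r:j + r + 1]

V = np.random.randn(16, 16)                # a learned 16 x 16 value map
patch = attend_value_patch(V, (0, 3))      # agent near the top edge
print(patch.shape)                         # (5, 5): fed to the reactive policy
```

The cropped patch, rather than the full value map, is what the 3-layer reactive policy consumes; this keeps the policy input small and tied to the agent's local context.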
# 4.4 WebNav Challenge
In the previous experiments, the planning aspect of the task corresponded to 2D navigation. We now consider a more general domain: WebNav [23], a language-based search task on a graph. In WebNav [23], the agent needs to navigate the links of a website towards a goal web-page, specified by a short 4-sentence query. At each state $s$ (a web-page), the agent can observe average word-embedding features of the state, $\phi(s)$, of the possible next states (linked pages), $\phi(s')$, and of the query, $\phi(q)$, and based on these has to select which link to follow. In [23], the search was performed on the Wikipedia website. Here, we report experiments on the "Wikipedia for Schools" website, a simplified Wikipedia designed for children, with over 6000 pages and at most 292 links per page. In [23], an NN-based policy was proposed, which first learns an NN mapping from $(\phi(s), \phi(q))$ to a hidden state vector $h$. The action is then selected according to $\pi(s'|\phi(s), \phi(q)) \propto \exp\left(h^\top \phi(s')\right)$.
In essence, this policy is reactive, and relies on the word-embedding features at each state to contain meaningful information about the path to the goal. Indeed, this property naturally holds for an encyclopedic website that is structured as a tree of categories, sub-categories, sub-sub-categories, etc. We sought to explore whether planning, based on a VIN, can lead to better performance in this task, with the intuition that a plan on a simplified model of the website can help guide the reactive policy in difficult queries. Therefore, we designed a VIN that plans on a small subset of the graph that contains only the 1st- and 2nd-level categories (< 3% of the graph), and their word-embedding features. Designing this VIN requires a different approach from the grid-world VINs described earlier, where the most challenging aspect is to define a meaningful mapping between nodes in the true graph and nodes in the smaller VIN graph. For the reward mapping $f_R$, we chose a weighted similarity measure between the query features $\phi(q)$ and the features of nodes in the small graph $\phi(\bar{s})$. Thus, intuitively, nodes that are similar to the query should have high reward. The transitions were fixed based on the graph connectivity of the smaller VIN graph, which is known, though different from the true graph. The attention module was also based on a weighted similarity measure, between the features of the possible next states $\phi(s')$ and the features of each node in the simplified graph $\phi(\bar{s})$ (a small sketch of this weighted-similarity idea follows below). The reactive-policy part of the VIN was similar to the policy described above. Note that by training such a VIN end-to-end, we are effectively learning how to exploit the small graph for doing better planning on the true, large graph. Both the VIN policy and the baseline reactive policy were trained by supervised learning, on random trajectories that start from the root node of the graph. Similarly to [23], a policy is said to succeed on a query if all the correct predictions along the path are within its top-4 predictions. After training, the VIN policy performed mildly better than the baseline on 2000 held-out test queries when starting from the root node, achieving 1030 successful runs vs. 1025 for the baseline. However, when we tested the policies on the harder task of starting from a random position in the graph, VINs significantly outperformed the baseline, achieving 346 successful runs vs. 304 for the baseline, out of 4000 test queries.
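To illustrate the weighted-similarity construction used above for the reward and attention modules, here is a small NumPy sketch (our own hedged illustration; the concrete parametrization appears in Appendix D.4, and the diagonal weighting here is modeled on it):

```python
import numpy as np

def similarity_scores(phi_query, phi_nodes, w, b):
    """Score each node of the small VIN graph against a feature vector
    (the query for the reward map; a candidate next state for attention).
    w acts as a learned diagonal weighting and b as a learned bias."""
    return np.tanh(phi_nodes @ (w * phi_query + b))    # shape: (num_nodes,)

rng = np.random.default_rng(0)
d, n_nodes = 50, 100                       # embedding size, nodes in the small graph
phi_q = rng.standard_normal(d)             # query word-embedding features
phi_nodes = rng.standard_normal((n_nodes, d))
w, b = rng.standard_normal(d), rng.standard_normal(d)
reward = similarity_scores(phi_q, phi_nodes, w, b)
print(reward.shape)                        # (100,): one reward per node
```

Nodes whose embeddings align with the query receive a high reward, which is exactly the "similar to the query means high reward" intuition.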
These results confirm that indeed, when navigating a tree of categories from the root up, the features at each state contain meaningful information about the path to the goal, making a reactive policy sufficient. However, when starting the navigation from a different state, a reactive policy may fail to understand that it needs to first go back to the root and switch to a different branch in the tree. Our results indicate such a strategy can be better represented by a VIN. We remark that there is still room for further improvements of the WebNav results, e.g., by better models for the reward and attention functions, and better word-embedding representations of text.

# 5 Conclusion and Outlook
The introduction of powerful and scalable RL methods has opened up a range of new problems for deep learning. However, few recent works investigate policy architectures that are specifically tailored for planning under uncertainty, and current RL theory and benchmarks rarely investigate the generalization properties of a trained policy [27, 21, 5]. This work takes a step in this direction, by exploring better generalizing policy representations. Our VIN policies learn an approximate planning computation relevant for solving the task, and we have shown that such a computation leads to better generalization in a diverse set of tasks, ranging from simple gridworlds that are amenable to value iteration, to continuous control, and even to navigation of Wikipedia links. In future work we intend to learn different planning computations, based on simulation [10], or optimal linear control [31], and combine them with reactive policies, to potentially develop RL solutions for task and motion planning [14].
# Acknowledgments
This research was funded in part by Siemens, by ONR through a PECASE award, by the Army Research Office through the MAST program, and by an NSF CAREER award (#1351028). A. T. was partially funded by the Viterbi Scholarship, Technion. Y. W. was partially funded by a DARPA PPAML program, contract FA8750-14-C-0011.

# References
[1] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[2] D. Bertsekas. Dynamic Programming and Optimal Control, Vol II. Athena Scientific, 4th edition, 2012.
[3] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition, pages 3642–3649, 2012.
[4] M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In ICML, 2011.
[5] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.
[6] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
[7] C. Finn, M. Zhang, J. Fu, X. Tan, Z. McCarthy, E. Scharff, and S. Levine. Guided policy search code implementation, 2016. Software available from rll.berkeley.edu/gps.
[8] K. Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position – Neocognitron. Transactions of the IECE, J62-A(10):658–665, 1979.
[9] A. Giusti et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 2016.
[10] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, 2014.
[11] X. Guo, S. Singh, R. Lewis, and H. Lee. Deep learning for reward design to improve Monte Carlo tree search in Atari games. arXiv:1604.07095, 2016.
[12] R. Ilin, R. Kozma, and P. J. Werbos. Efficient learning in cellular simultaneous recurrent neural networks – the case of maze navigation problem. In ADPRL, 2007.
[13] J. Joseph, A. Geramifard, J. W. Roberts, J. P. How, and N. Roy. Reinforcement learning with misspecified model classes. In ICRA, 2013.
[14] L. P. Kaelbling and T. Lozano-Pérez. Hierarchical task and motion planning in the now. International Conference on Robotics and Automation (ICRA), pages 1470–1477, 2011.
[15] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[17] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In NIPS, 2014.
[18] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. JMLR, 17, 2016.
[19] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[20] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[22] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In UAI, 2007.
[23] R. Nogueira and K. Cho. WebNav: A new large-scale task for natural language based sequential decision making. arXiv preprint arXiv:1602.02261, 2016.
[24] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[25] J. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In International Joint Conference on Neural Networks. IEEE, 1990.
[26] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[27] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[28] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
[29] T. Tieleman and G. Hinton. Lecture 6.5. COURSERA: Neural Networks for Machine Learning, 2012.
[30] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
[31] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, 2015.
[32] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.

# A Visualization of Learned Reward and Value
In Figure 5 we plot the learned reward and value function for the gridworld task. The learned reward is very negative at obstacles, very positive at the goal, and a slightly negative constant otherwise. The resulting value function has a peak at the goal, and a gradient pointing towards a direction to the goal around obstacles. This plot clearly shows that the VI block learned a useful planning computation.
Figure 5: Visualization of learned reward and value function. Left: a sample domain. Center: learned reward $f_R$ for this domain. Right: resulting value function (in the VI block) for this domain.

# B Weight Sharing
The VINs have an effective depth of K, which is larger than the depth of the reactive policies. One may wonder whether any deep-enough network would learn to plan. In principle, a CNN or FCN of depth K has the potential to perform the same computation as a VIN. However, it has many more parameters, requiring much more training data. We evaluate this by untying the weights in the K recurrent layers in the VIN. Our results, in Table 2, show that untying the weights degrades performance, with a stronger effect for smaller sizes of training data.

| Training data | VIN Pred. loss | VIN Succ. rate | VIN Traj. diff. | Untied Pred. loss | Untied Succ. rate | Untied Traj. diff. |
|---|---|---|---|---|---|---|
| 20% | 0.06 | 98.2% | 0.106 | 0.09 | 91.9% | 0.094 |
| 50% | 0.05 | 99.4% | 0.018 | 0.07 | 95.2% | 0.078 |
| 100% | 0.05 | 99.3% | 0.089 | 0.05 | 95.6% | 0.068 |

Table 2: Performance on the 16 × 16 grid-world domain. Evaluation of the effect of VI-module shared weights relative to data size.
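For reference, here is a minimal NumPy sketch of the VI-block recurrence with tied weights (our own simplification: one 3 × 3 transition kernel per action, applied to the sum of reward and value, rather than the paper's learned multi-channel convolutions). The point of weight sharing is that the same kernels are reused for all K iterations; untying them would multiply the parameter count by K.

```python
import numpy as np
from scipy.signal import convolve2d

def vi_block(R, kernels, K):
    """K iterations of value iteration as convolution + max over actions.
    R: (m, n) reward map; kernels: (A, 3, 3) transition weights shared
    across iterations (the "tied" setting of Table 2)."""
    V = np.zeros_like(R)
    for _ in range(K):
        Q = np.stack([convolve2d(R + V, P, mode="same") for P in kernels])
        V = Q.max(axis=0)                   # V(s) = max_a Q(s, a)
    return V

R = -0.01 * np.ones((8, 8)); R[7, 7] = 1.0  # small step cost, goal reward
kernels = np.full((4, 3, 3), 1.0 / 9.0)     # untrained, uniform "transitions"
print(vi_block(R, kernels, K=10).round(2))  # value spreads out from the goal
```

In the untied variant, `kernels` would be K separate (A, 3, 3) arrays, one per iteration, which is what degrades performance on small training sets in Table 2.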
# C Gridworld with Reinforcement Learning
We demonstrate that the value iteration network can be trained using reinforcement learning methods and achieves favorable generalization properties as compared to standard convolutional neural networks (CNNs). The overall setup of the experiment is as follows: we train policies parameterized by VINs and policies parameterized by convolutional networks on the same set of randomly generated gridworld maps in the same way (described below), and then test their performance on a held-out set of test maps, which was generated in the same way as the set of training maps but is disjoint from the training set. The MDP is what one would expect of a gridworld environment: the states are the positions on the map; the actions are movements up, down, left, and right; the rewards are +1 for reaching the goal, -1 for falling into a hole, and -0.01 otherwise (to encourage the policy to find the shortest path); the transitions are deterministic.

Structure of the networks. The VINs used are similar to those described in the main body of the paper. After K value-iteration recurrences, we have approximate Q values for every state and action in the map. The attention selects only those for the current state, and these are converted to a probability distribution over actions using the softmax function.

| Network | 8 × 8 | 16 × 16 |
|---|---|---|
| VIN | 90.9% | 82.5% |
| CNN | 86.9% | 33.1% |

Table 3: RL results: performance on test maps.
We use K = 10 for the 8 × 8 maps and K = 20 for the 16 × 16 maps. The convolutional networks' structure was adapted to accommodate the size of the maps. For the 8 × 8 maps, we use 50 filters in the first layer and then 100 filters in the second layer, all of size 3 × 3. Each of these layers is followed by a 2 × 2 max-pool. At the end we have a fully connected hidden layer with 100 hidden units, followed by a fully connected layer to the (4) outputs, which are converted to probabilities using the softmax function. The network for the 16 × 16 maps is similar but uses three convolutional layers (with 50, 100, and 100 filters respectively), the first two of which are 2 × 2 max-pooled, followed by two fully connected hidden layers (200 and 100 hidden units respectively) before connecting to the outputs and performing softmax.
Training with a curriculum. To ensure that the policies are not simply memorizing specific maps, we randomly select a map before each episode. But some maps are far more difficult than others, and the agent learns best when it stands a reasonable chance of reaching its goal. Thus we found it beneficial to begin training on the easiest maps and then gradually progress to more difficult maps. This is the idea of curriculum training. We consider curriculum training as a way to address the exploration problem. If a completely untrained agent is dropped into a very challenging map, it moves randomly and stands approximately zero chance of reaching the goal (and thus learning a useful reward). But even a random policy can consistently reach goals nearby and learn something useful in the process, e.g., to move toward the goal. Once the policy knows how to solve tasks of difficulty n, it can more easily learn to solve tasks of difficulty n + 1, as compared to a completely untrained policy. This strategy is well aligned with how formal education is structured; you can't effectively learn calculus without knowing basic algebra.
Not all environments have an obvious difficulty metric, but fortunately the gridworld task does. We define the difficulty of a map as the length of the shortest path from the start state to the goal state. It is natural to start with difficulty 1 (the start state and goal state are adjacent) and ramp up the difficulty by one level once a certain threshold of "success" is reached. In our experiments we use the average discounted return to assess progress and increase the difficulty level from n to n + 1 when the average discounted return for an iteration exceeds $1 - \frac{n}{35}$. This rule was chosen empirically and takes into account the fact that higher difficulty levels are more difficult to learn.
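In code, the schedule is a one-liner (a sketch; the per-iteration return estimate would come from the actual TRPO training loop):

```python
def advance_difficulty(difficulty, avg_discounted_return):
    """Move from difficulty n to n + 1 once the average discounted return
    for an iteration exceeds 1 - n / 35 (the empirical rule above)."""
    threshold = 1.0 - difficulty / 35.0
    return difficulty + 1 if avg_discounted_return > threshold else difficulty

difficulty = 1
for avg_return in [0.99, 0.95, 0.80, 0.96]:   # mock per-iteration returns
    difficulty = advance_difficulty(difficulty, avg_return)
print(difficulty)                              # 4: advanced three times
```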
All networks were trained using the trust region policy optimization (TRPO) [26] algorithm, using publicly available code in the RLLab benchmark [5].

Testing. When testing, we ignore the exact rewards and measure simply whether or not the agent reaches the goal. For each map in the test set, we run an episode, noting if the policy succeeds in reaching the goal. The proportion of successful trials out of all the trials is reported for each network (see Table 3). On the 8 × 8 maps, we used the same number of training iterations on both types of networks to make the comparison as fair as possible. On the 16 × 16 maps, it became clear that the convolutional network was struggling, so we allowed it twice as many training iterations as the VIN, yet it still failed to achieve even a remotely similar level of performance on the test maps (see the left image of Figure 6). We posit that this is because the VIN learns to plan, while the CNN simply follows a reactive policy. Though the CNN policy performs reasonably well on the smaller domains, it does not scale to larger domains, while the VIN does (see the right image of Figure 6).
# D Technical Details for Experiments
We report the full technical details used for training our networks.

Figure 6: RL results: performance of VIN and CNN on 16 × 16 test maps. Left: performance on all maps as a function of the amount of training (success rate on the test set vs. training epochs, in thousands of transitions). Right: success rate on test maps of increasing difficulty.

# D.1 Grid-world Domain
Our training set consists of $N_i = 5000$ random grid-world instances, with $N_t = 7$ shortest-path trajectories (calculated using an optimal planning algorithm) from a random start-state to a random goal-state for each instance; a total of $N_i \times N_t$ trajectories. For each state $s = (i, j)$ in each trajectory, we produce a $(2 \times m \times n)$-sized observation image $s_{image}$. The first channel of $s_{image}$ encodes the obstacle presence (1 for obstacle, 0 otherwise), while the second channel encodes the goal position (1 at the goal, 0 otherwise). The full observation vector is $\phi(s) = [s, s_{image}]$. In addition, for each state we produce a label $a$ that encodes the action (one of 8 directions) that an optimal shortest-path policy would take in that state.
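A sketch of the two-channel observation described above (our own helper function; channel 0 marks obstacles, channel 1 marks the goal):

```python
import numpy as np

def make_observation(obstacles, goal, m=16, n=16):
    """Build the (2, m, n) image s_image: an obstacle channel and a goal channel."""
    s_image = np.zeros((2, m, n), dtype=np.float32)
    for i, j in obstacles:
        s_image[0, i, j] = 1.0              # 1 at obstacle cells, 0 otherwise
    s_image[1, goal[0], goal[1]] = 1.0      # 1 at the goal cell, 0 otherwise
    return s_image

s_image = make_observation(obstacles=[(2, 3), (2, 4), (5, 5)], goal=(15, 15))
s = (0, 0)                                   # agent position s = (i, j)
phi = (s, s_image)                           # full observation phi(s) = [s, s_image]
print(s_image.shape)                         # (2, 16, 16)
```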
We design a VIN for this task as follows. The state space $\bar{S}$ was chosen to be an $m \times n$ grid-world, similar to the true state space $S$.⁴ The reward $\bar{R}$ in this space can be represented by an $m \times n$ map, and we chose the reward mapping $f_R$ to be a CNN with $s_{image}$ as its input, one layer with 150 kernels of size 3 × 3, and a second layer with one 3 × 3 filter to output $\bar{R}$. Thus, $f_R$ maps the image of obstacles and goal to a "reward image". The transitions $\bar{P}$ were defined as 3 × 3 convolution kernels in the VI block, and exploit the fact that transitions in the grid-world are local. Note that the transitions defined this way do not depend on the state $s$. Interestingly, we shall see that the network learned rewards and transitions that nevertheless enable it to successfully plan in this task. For the attention module, since there is a one-to-one mapping between the agent position in $S$ and in $\bar{S}$, we chose a trivial approach that selects the $\bar{Q}$ values in the VI block for the state in the real MDP $s$, i.e., $\psi(s) = \bar{Q}(s, \cdot)$. The final reactive policy is a fully connected softmax output layer with weights $W$: $\pi_{re}(\cdot|\psi(s)) \propto \exp\left(W^\top \psi(s)\right)$. We trained several neural-network policies based on a multi-class logistic regression loss function using stochastic gradient descent, with an RMSProp step size [29], implemented in the Theano [28] library. We compare the following policies:

VIN network: We used the VIN model of Section 3 as described above, with 10 channels for the q layer in the VI block.
The recurrence K was set relative to the problem size: K = 10 for 8 × 8 domains, K = 20 for 16 × 16 domains, and K = 36 for 28 × 28 domains. The guideline for choosing these values was to keep the network small while guaranteeing that goal information can flow to every state in the map.

CNN network: We devised a CNN-based reactive policy inspired by the recent impressive results of DQN [21], with 5 convolution layers with [50, 50, 100, 100, 100] kernels of size 3 × 3, and 2 × 2 max-pooling after the first and third layers. The final layer is fully connected, and maps to a softmax over actions. To represent the current state, we added to $s_{image}$ a channel that encodes the current position (1 at the current state, 0 otherwise).

⁴ For a particular configuration of obstacles, the true grid-world domain can be captured by an $m \times n$ state space with the obstacles encoded in the MDP transitions, as in our notation. For a general obstacle configuration, the obstacle positions have to also be encoded in the state. The VIN was able to learn a policy for a general obstacle configuration by planning in an $m \times n$ state space while also taking into account the observation of the map.
Fully Convolutional Network (FCN): The problem setting for this domain is similar to semantic segmentation [19], in which each pixel in the image is assigned a semantic label (the action, in our case). We therefore devised an FCN inspired by a state-of-the-art semantic segmentation algorithm [19], with 3 convolution layers, where the first layer has a filter that spans the whole image, to properly convey information from the goal to every other state. The first convolution layer has 150 filters of size $(2m - 1) \times (2n - 1)$, which span the whole image and can convey information about the goal to every pixel. The second layer has 150 filters of size 1 × 1, and the third layer has 10 filters of size 1 × 1, to produce an output sized $10 \times m \times n$, similarly to the $\bar{Q}$ layer in our VIN. Similarly to the attention mechanism in the VIN, the values that correspond to the current state (pixel) are passed to a fully connected softmax output layer.
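The filter size $(2m - 1) \times (2n - 1)$ is exactly what a single 'same'-padded convolution needs so that every pixel can influence every other pixel; a quick check of this claim (our own verification script):

```python
import numpy as np
from scipy.signal import convolve2d

m = n = 8
image = np.zeros((m, n)); image[7, 7] = 1.0   # goal information at one corner
kernel = np.ones((2 * m - 1, 2 * n - 1))      # one (2m-1) x (2n-1) filter
out = convolve2d(image, kernel, mode="same")
print(np.count_nonzero(out) == m * n)         # True: the goal reaches every state
```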
# D.2 Mars Domain
We consider the problem of autonomously navigating the surface of Mars by a rover such as the Mars Science Laboratory (MSL) (Lockwood, 2006) over long-distance trajectories. The MSL has a limited ability for climbing high-degree slopes, and its path-planning algorithm should therefore avoid navigating into high-slope areas. In our experiment, we plan trajectories that avoid slopes of 10 degrees or more, using overhead terrain images from the High Resolution Imaging Science Experiment (HiRISE) (McEwen et al., 2007). The HiRISE data consists of grayscale images of the Mars terrain, and matching elevation data, accurate to tens of centimeters. We used an image of a 33.3 km by 6.3 km area at 49.96 degrees latitude and 219.2 degrees longitude, with a 10.5 sq. meters / pixel resolution.
Each domain is a 128 × 128 image patch, on which we defined a 16 × 16 grid-world, where each state was considered an obstacle if its corresponding 8 × 8 image patch contained an angle of 10 degrees or more, evaluated using the additional elevation data. An example of the domain and terrain image is depicted in Figure 3. The MDP for shortest-path planning in this case is similar to the grid-world domain of Section 4.1, and the VIN design was similar, only with a deeper CNN in the reward mapping $f_R$ for processing the image. Our goal is to train a network that predicts the shortest-path trajectory directly from the terrain image data. We emphasize that the ground-truth elevation data is not part of the input, and the elevation therefore must be inferred (if needed) from the terrain image itself.
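A sketch of the obstacle-labeling rule under stated assumptions: we approximate "contains an angle of 10 degrees or more" by thresholding the maximum elevation-gradient angle inside each 8 × 8 patch, and the per-pixel ground distance is an illustrative placeholder.

```python
import numpy as np

def obstacle_grid(elevation, cell=8, pixel_meters=3.2, max_deg=10.0):
    """Label each cell of the coarse grid as an obstacle if the slope inside
    its cell x cell elevation patch exceeds max_deg degrees."""
    grad_r, grad_c = np.gradient(elevation, pixel_meters)   # rise / run
    slope_deg = np.degrees(np.arctan(np.hypot(grad_r, grad_c)))
    m, n = elevation.shape[0] // cell, elevation.shape[1] // cell
    patches = slope_deg[:m * cell, :n * cell].reshape(m, cell, n, cell)
    return patches.max(axis=(1, 3)) >= max_deg              # (m, n) boolean map

elevation = np.cumsum(np.random.rand(128, 128), axis=0)     # toy elevation data
print(obstacle_grid(elevation).shape)                       # (16, 16)
```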
Our VIN design follows the model of Section 4.1. In this case, however, instead of feeding in the obstacle map, we feed in the raw terrain image, and accordingly modify the reward mapping $f_R$ with 2 additional CNN layers for processing the image: the first with 6 kernels of size 5 × 5 and 4 × 4 max-pooling, and the second with 12 kernels of size 3 × 3 and 2 × 2 max-pooling. The resulting $12 \times m \times n$ tensor is concatenated with the goal image, and passed to a third layer with 150 kernels of size 3 × 3 and a fourth layer with one 3 × 3 filter to output $\bar{R}$. The state inputs and output labels remain as in the grid-world experiments. We emphasize that the whole network is trained end-to-end, without pre-training the input filters. In Table 4 we present our results for training an m = n = 16 map from a 10K image-patch dataset, with 7 random trajectories per patch, evaluated on a held-out test set of 1K patches. Figure 3 shows an instance of the input image, the obstacles, the shortest-path trajectory, and the trajectory predicted by our method. To put the 84.8% success rate in context, we compare with the best performance achievable without access to the elevation data. To make this comparison, we trained a CNN to classify whether an 8 × 8 patch is an obstacle or not.
This classifier was trained using the same image data as the VIN network, but its labels were the true obstacle classifications from the elevation map (we reiterate that the VIN network did not have access to these ground-truth obstacle classification labels during training or testing). Training this classifier is a standard binary classification problem, and its performance represents the best obstacle identification possible with our CNN in this domain. The best-achievable shortest-path prediction is then defined as the shortest path in an obstacle map generated by this classifier from the raw image. The results of this optimal predictor are reported in Table 1. The 90.3% success rate shows that obstacle identification from the raw image is indeed challenging. Thus, the success rate of the VIN network, which was trained without any obstacle labels and had to "figure out" the planning process, is quite remarkable.

| | Pred. loss | Succ. rate | Traj. diff. |
|---|---|---|---|
| VIN | 0.089 | 84.8% | 0.016 |
| Best achievable | - | 90.3% | 0.0089 |

Table 4: Performance of VINs on the Mars domain. For comparison, the performance of a planner that used obstacle predictions trained from labeled obstacle data is shown. This upper bound on performance demonstrates the difficulty in identifying obstacles from the raw image data. Remarkably, the VIN achieved close performance without access to any labeled data about the obstacles.

# D.3 Continuous Control
For training we chose the guided policy search (GPS) algorithm with unknown dynamics [17], which is suitable for learning policies for continuous dynamics with contacts, and we used the publicly available GPS code [7], and MuJoCo [30] for physical simulation. GPS works by learning time-varying iLQG controllers for each domain, and then fitting the controllers to a single NN policy using supervised learning. This process is repeated for several iterations, and a special cost function is used to enforce an agreement between the trajectory distribution of the iLQG and NN controllers. We refer to [17, 7] for the full algorithm details. For our task, we ran 10 iterations of iLQG, with the cost being a quadratic distance to the goal, followed by one iteration of NN policy fitting. This allows us to cleanly compare VINs to other policies without GPS-specific effects.

Our VIN design is similar to the grid-world cases: the state space $\bar{S}$ is a 16 × 16 grid-world, and the transitions $\bar{P}$ are 3 × 3 convolution kernels in the VI block, similar to the grid-world of Section 4.1. However, we made some important modifications: the attention module selects a 5 × 5 patch of the value $\bar{V}$, centered around the current (discretized) position in the map. The final reactive policy is a 3-layer fully connected network, with a 2-dimensional continuous output for the controls. In addition, due to the limited number of training domains, we pre-trained the VIN with transition weights that correspond to discounted grid-world transitions (for example, the transitions for an action to go north-west would be $\gamma$ in the top-left corner and zeros otherwise), before training end-to-end. This is a reasonable prior for the weights in a 2-d task, and we emphasize that even with this initialization, the initial value function is meaningless, since the reward map $f_R$ is not yet learned.
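The pre-training prior can be written down directly; a sketch, assuming 8 compass actions on a 3 × 3 neighborhood (our reading of "discounted grid-world transitions"):

```python
import numpy as np

def discounted_transition_kernels(gamma=0.99):
    """One 3 x 3 kernel per action: gamma at the cell the action moves to
    (e.g. the top-left corner for north-west), zeros elsewhere."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),      # N, S, W, E
             (-1, -1), (-1, 1), (1, -1), (1, 1)]    # NW, NE, SW, SE
    kernels = np.zeros((len(moves), 3, 3))
    for a, (di, dj) in enumerate(moves):
        kernels[a, 1 + di, 1 + dj] = gamma
    return kernels

kernels = discounted_transition_kernels()
print(kernels[4])    # NW kernel: gamma in the top-left corner, zeros otherwise
```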
The reward mapping $f_R$ is a CNN with $s_{image}$ as its input, one layer with 150 kernels of size 3 × 3, and a second layer with one 3 × 3 filter to output $\bar{R}$.

# D.4 WebNav
"WebNav" [23] is a recently proposed goal-driven web navigation benchmark. In WebNav, web pages and links from some website form a directed graph $G(S, E)$. The agent is presented with a query text, which consists of $N_q$ sentences from a target page at most $N_h$ hops away from the starting page. The goal for the agent is to navigate to that target page from the starting page via clicking at most $N_p$ links per page. Here, we choose $N_q = N_h = N_p = 4$. In [23], the agent receives a reward of 1 when reaching the target page via any path no longer than 10 hops. For evaluation convenience, in our experiment the agent can receive a reward only if it reaches the destination via the shortest path, which makes the task much harder. We measure the top-1 and top-4 prediction accuracy as well as the average reward for the baseline and our VIN model.
For every page $s$, the valid transitions are $A_s = \{s' : (s, s') \in E\}$. For every web page $s$ and every query text $q$, we utilize the bag-of-words model with the pretrained word embeddings provided by [23] to produce feature vectors $\phi(s)$ and $\phi(q)$. The agent should choose at most $N_p$ valid actions from $A_s$ based on the current $s$ and $q$.
The baseline method of [23] uses a single tanh-layer neural net parametrized by $W$ to compute a hidden vector $h$: $h(s,q) = \tanh\left(W [\phi(s); \phi(q)]\right)$. The final baseline policy is computed via $\pi_{bsl}(s'|s,q) \propto \exp\left(h(s,q)^\top \phi(s')\right)$ for $s' \in A_s$.

We design a VIN for this task as follows. We first selected a smaller website as the approximate graph $\bar{G}(\bar{S}, \bar{E})$, and chose $\bar{S}$ as the states in VI. For a query $q$ and a page $\bar{s}$ in $\bar{S}$, we compute the reward $\bar{R}(\bar{s})$ by $f_R(\bar{s}|q) = \tanh\left((W_R \phi(q) + b_R)^\top \phi(\bar{s})\right)$, with parameters $W_R$ (a diagonal matrix) and $b_R$ (a vector). For the transitions, since the graph remains unchanged, $\bar{P}$ is fixed. For the attention module, we compute $\Pi(\bar{V}^*, s) = \sum_{\bar{s} \in \bar{S}} \mathrm{sigmoid}\left((W_\Pi \phi(s) + b_\Pi)^\top \phi(\bar{s})\right) \bar{V}^*(\bar{s})$, where $W_\Pi$ and $b_\Pi$ are parameters and $W_\Pi$ is diagonal. Moreover, we compute the coefficient $\gamma$ based on the query $q$ and the state $s$ using a tanh-layer neural net parametrized by $W_\gamma$: $\gamma(s,q) = \tanh\left(W_\gamma [\phi(s); \phi(q)]\right)$. Finally, we combine the VI module and the baseline method as our VIN model by simply adding the outputs from these two networks together.

In addition to the experiments reported in the main text, we performed experiments on the full Wikipedia, using "Wikipedia for Schools" as the graph for VIN planning. We report our preliminary results here.

Full Wikipedia website: The full Wikipedia dataset consists of 779,169 training queries (3 million training samples) and 20,004 testing queries (76,664 testing samples) over 4.8 million pages with at most 300 links per page. We use the whole WikiSchool website as our approximate graph and set K = 4. In the VIN, to accelerate training, we first train only the VI module with K = 0. Then, we fix the $\bar{R}$ obtained in the K = 0 case and jointly train the whole model with K = 4. The results are shown in Table 5.

| Network | Top-1 Test Err. | Top-4 Test Err. | Avg. Reward |
|---|---|---|---|
| BSL | 52.019% | 24.424% | 0.27779 |
| VIN | 50.562% | 26.055% | 0.30389 |

Table 5: Performance on the full Wikipedia dataset.

VIN achieves 1.5% better prediction accuracy than the baseline. Interestingly, with only a 1.5% enhancement in prediction accuracy, VIN achieves a 2.5% better success rate than the baseline; note that the agent can only succeed when making 4 consecutive correct predictions. This indicates the VI does provide useful high-level planning information.

# D.5 Additional Technical Comments
Runtime: For the 2D domains, different samples from the same domain share the same VI computation, since they have the same observation. Therefore, a single VI computation is required for samples from the same domain. Using this, and GPU code (Theano), VINs are not much slower than the baselines. For the language task, however, since Theano doesn't support convolutions on graphs nor sparse operations on GPU, VINs were considerably slower in our implementation.

# E Hierarchical VI Modules
The number of VI iterations K required in the VIN depends on the problem size. Consider, for example, a grid-world in which the goal is located L steps away from some state s. Then, at least L iterations of VI are required to convey the reward information from the goal to state s, and clearly, any action prediction obtained with fewer than L VI iterations at state s is unaware of the goal location, and therefore unacceptable. To convey reward information faster in VI, and reduce the effective K, we propose to perform VI at multiple levels of resolution. We term this model a hierarchical VI network (HVIN), due to its similarity with hierarchical planning algorithms. In an HVIN, a copy of the input, down-sampled by a factor of d, is first fed into a VI module termed the high-level VI module. The down-sampling offers a d× speedup of information transmission in the map, at the price of reduced accuracy. The value layer of the high-level VI module is then up-sampled, and added as an additional input channel to the input of the standard VI module. Thus, the high-level VI module learns a mapping from down-sampled image features to a suitable reward shaping for the nominal VI module. The full HVIN model is depicted in Figure 7. This model can easily be extended to include multiple levels of hierarchy. Table 6 shows the performance of the HVIN module in the grid-world task, compared to the VIN results reported in the main text.
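A pseudocode-level NumPy sketch of this information flow (our own toy version; the real model uses learned convolutions at every stage, and `toy_vi` below merely stands in for a trained VI block):

```python
import numpy as np

def toy_vi(R, K):
    """Stand-in for a learned VI block: reward flows to 4-neighbours, discounted.
    (np.roll wraps around the borders; acceptable for a toy.)"""
    V = R.copy()
    for _ in range(K):
        neighbours = [np.roll(V, s, axis=ax) for ax in (0, 1) for s in (-1, 1)]
        V = np.maximum(V, 0.9 * np.maximum.reduce(neighbours))
    return V

def hvin_value(R, d=2, K_high=5, K_low=10):
    """Hierarchical VI: run VI on a d-times down-sampled reward map, up-sample
    the coarse value, and feed it as reward shaping to the fine VI module."""
    m, n = R.shape
    R_coarse = R.reshape(m // d, d, n // d, d).mean(axis=(1, 3))   # down-sample
    V_coarse = toy_vi(R_coarse, K_high)        # cheap long-range propagation
    V_up = np.kron(V_coarse, np.ones((d, d)))  # nearest-neighbour up-sampling
    return toy_vi(R + V_up, K_low)             # fine VI with shaped reward

print(hvin_value(np.random.rand(16, 16)).shape)   # (16, 16)
```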
We used a 2 × 2 down-sampling layer. Similarly to the standard VIN, we used 3 × 3 convolution kernels, 150 channels for each hidden layer H (for both the down-sampled image and the standard image), and 10 channels for the q layer in each VI block. Similarly to the VIN networks, the recurrence K was set relative to the problem size, taking into account the down-sampling factor: K = 4 for 8 × 8 domains, K = 10 for 16 × 16 domains, and K = 16 for 28 × 28 domains (in comparison, the respective K values for standard VINs were 10, 20, and 36). The HVINs demonstrated better performance on the larger 28 × 28 map, which we attribute to the improved information transmission in the hierarchical VI module.
Figure 7: Hierarchical VI network. A copy of the input is first fed into a convolution layer and then down-sampled. This signal is then fed into a VI module to produce a coarse value function, corresponding to the upper level in the hierarchy. This value function is then up-sampled, and added as an additional channel in the reward layer of a standard VI module (lower level of the hierarchy).
| Domain | VIN Pred. loss | VIN Succ. rate | VIN Traj. diff. | HVIN Pred. loss | HVIN Succ. rate | HVIN Traj. diff. |
|---|---|---|---|---|---|---|
| 8 × 8 | 0.004 | 99.6% | 0.001 | 0.005 | 99.3% | 0.0 |
| 16 × 16 | 0.05 | 99.3% | 0.089 | 0.03 | 99% | 0.007 |
| 28 × 28 | 0.11 | 97% | 0.086 | 0.05 | 98.1% | 0.037 |

Table 6: HVIN performance on the grid-world domain.
arXiv:1602.02410v2 [cs.CL] 11 Feb 2016

# Exploring the Limits of Language Modeling

Rafal Jozefowicz ([email protected]), Oriol Vinyals ([email protected]), Mike Schuster ([email protected]), Noam Shazeer ([email protected]), Yonghui Wu ([email protected])
Google Brain

# Abstract
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long-term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves the state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
# 1. Introduction
Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amount of information about the knowledge that a corpus may contain. Indeed, models that are able to assign a low probability to sentences that are grammatically correct but unlikely may help other tasks in fundamental language understanding like question answering, machine translation, or text summarization.

LMs have played a key role in traditional NLP tasks such as speech recognition (Mikolov et al., 2010; Arisoy et al., 2012), machine translation (Schwenk et al., 2012; Vaswani et al.), or text summarization (Rush et al., 2015; Filippova et al., 2015). Often (although not always), training better language models improves the underlying metrics of the downstream task (such as word error rate for speech recognition, or BLEU score for translation), which makes the task of training better LMs valuable by itself.
Despite the fact that simpler models, such as N- grams, only use a short history of previous words to predict the next word, they are still a key component to high qual- ity, low perplexity LMs. Indeed, most recent work on large scale LM has shown that RNNs are great in combination with N-grams, as they may have different strengths that complement N-gram models, but worse when considered in isolation (Mikolov et al., 2011; Mikolov, 2012; Chelba et al., 2013; Williams et al., 2015; Ji et al., 2015a; Shazeer et al., 2015). We believe that, despite much work being devoted to small data sets like the Penn Tree Bank (PTB) (Marcus et al., 1993), research on larger tasks is very relevant as overï¬ t- ting is not the main limitation in current language model- ing, but is the main characteristic of the PTB task. Results on larger corpora usually show better what matters as many ideas work well on small data sets but fail to improve on Exploring the Limits of Language Modeling CAT Cc A T Char CNN i ae Cc | A T | Char CNN | Char CNN THE T| [4] [e] || THE | {a) (b) (ce) â
¢ We show that an ensemble of a number of different models can bring down perplexity on this task to 23.7, a large improvement compared to current state-of-art. â ¢ We share the model and recipes in order to help and motivate further research in this area. In Section 2 we review important concepts and previous work on language modeling. Section 3 presents our contri- butions to the ï¬ eld of neural language modeling, emphasiz- ing large scale recurrent neural network training. Sections 4 and 5 aim at exhaustively describing our experience and understanding throughout the project, as well as emplacing our work relative to other known approaches. # 2. Related Work Figure 1. A high-level diagram of the models presented in this pa- per. (a) is a standard LSTM LM. (b) represents an LM where both input and Softmax embeddings have been replaced by a character CNN. In (c) we replace the Softmax by a next character prediction LSTM network. In this section we describe previous work relevant to the approaches discussed in this paper. A more detailed dis- cussion on language modeling research is provided in (Mikolov, 2012). # 2.1. Language Models larger data sets. Further, given current hardware trends and vast amounts of text available on the Web, it is much more straightforward to tackle large scale modeling than it used to be. Thus, we hope that our work will help and motivate researchers to work on traditional LM beyond PTB â for this purpose, we will open-source our models and training recipes. We focused on a well known, large scale LM benchmark: the One Billion Word Benchmark data set (Chelba et al., 2013). This data set is much larger than PTB (one thou- sand fold, 800k word vocabulary and 1B words training data) and far more challenging. Similar to Imagenet (Deng et al., 2009), which helped advance computer vision, we believe that releasing and working on large data sets and models with clear benchmarks will help advance Language Modeling. The contributions of our work are as follows:
1602.02410#2
1602.02410#4
1602.02410
[ "1512.00103" ]
1602.02410#4
Exploring the Limits of Language Modeling
â ¢ We explored, extended and tried to unify some of the current research on large scale LM. â ¢ Speciï¬ cally, we designed a Softmax loss which is based on character level CNNs, is efï¬ cient to train, and is as precise as a full Softmax which has orders of magnitude more parameters. Language Modeling (LM) has been a central task in NLP. The goal of LM is to learn a probability distribution over sequences of symbols pertaining to a language. Much work has been done on both parametric (e.g., log-linear models) and non-parametric approaches (e.g., count-based LMs). Count-based approaches (based on statistics of N-grams) typically add smoothing which account for unseen (yet pos- sible) sequences, and have been quite successful. To this extent, Kneser-Ney smoothed 5-gram models (Kneser & Ney, 1995) are a fairly strong baseline which, for large amounts of training data, have challenged other paramet- ric approaches based on Neural Networks (Bengio et al., 2006). Most of our work is based on Recurrent Neural Networks (RNN) models which retain long term dependencies. To this extent, we used the Long-Short Term Memory model (Hochreiter & Schmidhuber, 1997) which uses a gating mechanism (Gers et al., 2000) to ensure proper propaga- tion of information through many time steps. Much work has been done on small and large scale RNN-based LMs (Mikolov et al., 2010; Mikolov, 2012; Chelba et al., 2013; Zaremba et al., 2014; Williams et al., 2015; Ji et al., 2015a; Wang & Cho, 2015; Ji et al., 2015b). The architectures that we considered in this paper are represented in Figure 1.
1602.02410#3
1602.02410#5
1602.02410
[ "1512.00103" ]
1602.02410#5
Exploring the Limits of Language Modeling
â ¢ Our study yielded signiï¬ cant improvements to the state-of-the-art on a well known, large scale LM task: from 51.3 down to 30.0 perplexity for single models whilst reducing the number of parameters by a factor of 20. In our work, we train models on the popular One Bil- lion Word Benchmark, which can be considered to be a medium-sized data set for count-based LMs but a very large data set for NN-based LMs. This regime is most interesting to us as we believe learning a very good model of human language is a complex task which will require large models, Exploring the Limits of Language Modeling and thus large amounts of data. Further advances in data availability and computational resources helped our study. We argue this leap in scale enabled tremendous advances in deep learning. A clear example found in computer vision is Imagenet (Deng et al., 2009), which enabled learning com- plex vision models from large amounts of data (Krizhevsky et al., 2012). A crucial aspect which we discuss in detail in later sections is the size of our models. Despite the large number of pa- rameters, we try to minimize computation as much as pos- sible by adopting a strategy proposed in (Sak et al., 2014) of projecting a relatively big recurrent state space down so that the matrices involved remain relatively small, yet the model has large memory capacity. # 2.2. Convolutional Embedding Models inner product zw = hT ew where h is a context vector and ew is a â word embeddingâ for w. The main challenge when |V | is very large (in the order of one million in this paper) is the fact that computing all inner products between h and all embeddings becomes prohibitively slow during training (even when exploiting matrix-matrix multiplications and modern GPUs).
1602.02410#4
1602.02410#6
1602.02410
[ "1512.00103" ]
1602.02410#6
Exploring the Limits of Language Modeling
Several approaches have been proposed to cope with the scaling is- sue: importance sampling (Bengio et al., 2003; Bengio & Sen´ecal, 2008), Noise Contrastive Estimation (NCE) (Gut- mann & Hyv¨arinen, 2010; Mnih & Kavukcuoglu, 2013), self normalizing partition functions (Vincent et al., 2015) or Hierarchical Softmax (Morin & Bengio, 2005; Mnih & Hinton, 2009) â they all offer good solutions to this prob- lem. We found importance sampling to be quite effective on this task, and explain the connection between it and NCE in the following section, as they are closely related. There is an increased interest in incorporating character- level inputs to build word embeddings for various NLP problems, including part-of-speech tagging, parsing and language modeling (Ling et al., 2015; Kim et al., 2015; Ballesteros et al., 2015). The additional character informa- tion has been shown useful on relatively small benchmark data sets.
1602.02410#5
1602.02410#7
1602.02410
[ "1512.00103" ]
1602.02410#7
Exploring the Limits of Language Modeling
# 3. Language Modeling Improvements Recurrent Neural Networks based LMs employ the chain rule to model joint probabilities over word sequences: The approach proposed in (Ling et al., 2015) builds word embeddings using bidirectional LSTMs (Schuster & Pali- wal, 1997; Graves & Schmidhuber, 2005) over the charac- ters. The recurrent networks process sequences of charac- ters from both sides and their ï¬ nal state vectors are concate- nated. The resulting representation is then fed to a Neural Network. This model achieved very good results on a part- of-speech tagging task.
1602.02410#6
1602.02410#8
1602.02410
[ "1512.00103" ]
1602.02410#8
Exploring the Limits of Language Modeling
N p(wi,...,WN) = [rites +++, Wi-1) i=1 where the context of all previous words is encoded with an LSTM, and the probability over words uses a Softmax (see Figure 1(a)). # 3.1. Relationship between Noise Contrastive Estimation and Importance Sampling In (Kim et al., 2015), the words characters are processed by a 1-d CNN (Le Cun et al., 1990) with max-pooling across the sequence for each convolutional feature. The result- ing features are fed to a 2-layer highway network (Srivas- tava et al., 2015b), which allows the embedding to learn se- mantic representations. The model was evaluated on small- scale language modeling experiments for various languages and matched the best results on the PTB data set despite having 60% fewer parameters. # 2.3. Softmax Over Large Vocabularies As discussed in Section 2.3, a large scale Softmax is neces- sary for training good LMs because of the vocabulary size. A Hierarchical Softmax (Mnih & Hinton, 2009) employs a tree in which the probability distribution over words is decomposed into a product of two probabilities for each word, greatly reducing training and inference time as only the path speciï¬ ed by the hierarchy needs to be computed and updated. Choosing a good hierarchy is important for obtaining good results and we did not explore this approach further for this paper as sampling methods worked well for our setup. Assigning probability distributions over large vocabularies is computationally challenging. For modeling language, maximizing log-likelihood of a given word sequence leads to optimizing cross-entropy between the target probability distribution (e.g., the target word we should be predicting), and our model predictions p. Generally, predictions come from a linear layer followed by a Softmax non-linearity: where zw is the logit correspond- p(w) = Sampling approaches are only useful during training, as they propose an approximation to the loss which is cheap to compute (also in a distributed setting) â however, at infer- ence time one still has to compute the normalization term over all words. Noise Contrastive Estimation (NCE) pro- poses to consider a surrogate binary classiï¬ cation task in which a classiï¬
1602.02410#7
1602.02410#9
1602.02410
[ "1512.00103" ]
1602.02410#9
Exploring the Limits of Language Modeling
is trained to discriminate between true data and samples coming from some arbitrary noise distribution. If both the noise and data distributions were known, the optimal classifier would be:

$$p(Y = \text{true} \mid w) = \frac{p_d(w)}{p_d(w) + k\, p_n(w)}$$

where $Y$ is the binary random variable indicating whether $w$ comes from the true data distribution, $k$ is the number of negative samples per positive word, and $p_d$ and $p_n$ are the data and noise distributions respectively (we drop any dependency on previous words for notational simplicity).

It is easy to show that if we train a logistic classifier $p_\theta(Y = \text{true} \mid w) = \sigma(s_\theta(w, h) - \log k\, p_n(w))$, where $\sigma$ is the logistic function, then $p_\theta(w) = \mathrm{softmax}(s_\theta(w, h))$ is a good approximation of $p_d(w)$ ($s_\theta$ is a logit which e.g. an LSTM LM computes).

The other technique, which is based on importance sampling (IS), proposes to directly approximate the partition function (which comprises a sum over all words) with an estimate of it obtained through importance sampling. Though the methods look superficially similar, we will derive a surrogate classification task akin to NCE which arrives at IS, showing a strong connection between the two.

Suppose that, instead of having a binary task to decide whether a word comes from the data or from the noise distribution, we want to identify the word coming from the true data distribution in a set $W = \{w_1, \dots, w_{k+1}\}$ comprised of $k$ noise samples and one data distribution sample. Thus, we can train a multiclass loss over a multinomial random variable $Y$ which maximizes $\log p(Y = 1 \mid W)$, assuming w.l.o.g. that $w_1 \in W$ is always the word coming from true data. By Bayes' rule, and ignoring terms that are constant with respect to $Y$, we can write:

$$p(Y = k \mid W) \propto \frac{p_d(w_k)}{p_n(w_k)}$$

and, following a similar argument as for NCE, if we define $p(Y = k \mid W) = \mathrm{softmax}(s_\theta(w_k) - \log p_n(w_k))$, then $p_\theta(w) = \mathrm{softmax}(s_\theta(w, h))$ is a good approximation of $p_d(w)$. Note that the only difference between NCE and IS is that, in NCE, we define a binary classification task between true and noise words with a logistic loss, whereas in IS we define a multiclass classification problem with a Softmax and cross-entropy loss. We hope that our derivation helps clarify the similarities and differences between the two. In particular, we observe that IS, as it optimizes a multiclass classification task (in contrast to solving a binary task), may be a better choice: the updates to the logits with IS are tied, whereas in NCE they are independent.

# 3.2. CNN Softmax

The character-level features allow for a smoother and more compact parametrization of the word embeddings. Recent efforts on small scale language modeling have used CNN character embeddings for the input embeddings (Kim et al., 2015). Although not as straightforward, we propose an extension to this idea to also reduce the number of parameters of the Softmax layer.
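The following numpy sketch contrasts the two surrogate objectives derived above, for a single target word and $k$ noise samples. The logits and noise log-probabilities are made-up numbers used purely for illustration, not trained values.

```python
import numpy as np

def nce_loss(s_true, s_noise, log_pn_true, log_pn_noise, k):
    # Binary task: sigma(s - log(k * p_n)) should say "true" for the data
    # word and "noise" for each of the k sampled words.
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    p_true = sigmoid(s_true - (np.log(k) + log_pn_true))
    p_noise = sigmoid(s_noise - (np.log(k) + log_pn_noise))
    return -(np.log(p_true) + np.log(1.0 - p_noise).sum())

def is_loss(s_true, s_noise, log_pn_true, log_pn_noise):
    # Multiclass task over W = {true word} + {k noise words}: softmax over
    # the adjusted logits s - log p_n, cross-entropy against index 0.
    adj = np.concatenate(([s_true - log_pn_true], s_noise - log_pn_noise))
    adj -= adj.max()
    return -(adj[0] - np.log(np.exp(adj).sum()))

k = 4
s_true, s_noise = 2.0, np.array([0.3, -1.2, 0.5, 0.1])
log_pn_true, log_pn_noise = -3.0, np.array([-2.0, -4.0, -2.5, -3.2])
print(nce_loss(s_true, s_noise, log_pn_true, log_pn_noise, k))
print(is_loss(s_true, s_noise, log_pn_true, log_pn_noise))
```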
1602.02410#8
1602.02410#10
1602.02410
[ "1512.00103" ]
1602.02410#10
Exploring the Limits of Language Modeling
Recall from Section 2.3 that the Softmax computes a logit as $z_w = h^T e_w$, where $h$ is a context vector and $e_w$ the word embedding. Instead of building a matrix of size $|V| \times |h|$ (whose rows correspond to $e_w$), we produce $e_w$ with a CNN over the characters of $w$, as $e_w = \mathrm{CNN}(\text{chars}_w)$; we call this a CNN Softmax. We used the same network architecture to dynamically generate the Softmax word embeddings, without sharing the parameters with the input word-embedding sub-network. For inference, the vectors $e_w$ can be precomputed, so there is no increase in computational complexity w.r.t. the regular Softmax.

We note that, when using an importance sampling loss such as the one described in Section 3.1, only a few logits have non-zero gradient (those corresponding to the true and sampled words). With a Softmax where the $e_w$ are independently learned word embeddings, this is not a problem. But we observed that, when using a CNN, all the logits become tied, as the function mapping from $w$ to $e_w$ is quite smooth. As a result, a much smaller learning rate had to be used. Even with this, the model lacks the capacity to differentiate between words that have very different meanings but are spelled similarly. Thus, a reasonable compromise was to add a small correction factor which is learned per word, such that:
1602.02410#9
1602.02410#11
1602.02410
[ "1512.00103" ]
1602.02410#11
Exploring the Limits of Language Modeling
$$z_w = h^T \mathrm{CNN}(\text{chars}_w) + h^T M\, \mathrm{corr}_w$$

where $M$ is a matrix projecting a low-dimensional embedding vector $\mathrm{corr}_w$ back up to the dimensionality of the projected LSTM hidden state $h$. This amounts to adding a bottleneck linear layer, and brings the CNN Softmax much closer to our best result, as can be seen in Table 1, where adding a 128-dim correction halves the gap between the regular and the CNN Softmax.

Aside from a big reduction in the number of parameters and the incorporation of morphological knowledge from words, the other benefit of this approach is that out-of-vocabulary (OOV) words can easily be scored. This may be useful for other problems such as Machine Translation, where handling out-of-vocabulary words is very important (Luong et al., 2014). This approach also allows parallel training over various data sets, since the model is no longer explicitly parametrized by the vocabulary size, or by the language. This has been shown to help when using byte-level input embeddings for named entity recognition (Gillick et al., 2015), and we hope it will enable similar gains when used to map onto words.

# 3.3. Char LSTM Predictions

The CNN Softmax layer can handle arbitrary words and is much more efficient in terms of number of parameters than the full Softmax matrix. It is, though, still considerably slow, as to evaluate perplexities we need to compute the partition function. A class of models that solve this problem more efficiently are character-level LSTMs (Sutskever et al., 2011; Graves, 2013). They make predictions one character at a time, thus allowing probabilities to be computed over a much smaller vocabulary. On the other hand, these models are more difficult to train and seem to perform worse even in small tasks like PTB (Graves, 2013). Most likely this is because the sequences become much longer on average, as the LSTM reads the input character by character instead of word by word.
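Here is a minimal sketch of the CNN Softmax logit with the per-word correction term. The character "CNN" below is a hypothetical stand-in (a character-embedding sum) so the example stays self-contained; the real model uses the convolutional architecture described in Section 4, and all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
H, C, D = 8, 30, 2            # hidden size, char alphabet, correction dim
char_emb = rng.normal(size=(C, H))   # stand-in for the character CNN
M = rng.normal(size=(H, D))          # bottleneck projection matrix
corr = {}                            # small learned per-word vectors

def cnn_embedding(word):
    chars = [ord(c) % C for c in word]
    return char_emb[chars].sum(axis=0)   # placeholder for CNN + pooling

def logit(h, word):
    z = h @ cnn_embedding(word)          # z_w = h^T CNN(chars_w)
    if word in corr:                     # correction only for known words
        z += h @ (M @ corr[word])        # + h^T M corr_w
    return z

h = rng.normal(size=H)
corr["cat"] = rng.normal(size=D)
print(logit(h, "cat"), logit(h, "catt"))  # OOV words still get a score
```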
1602.02410#10
1602.02410#12
1602.02410
[ "1512.00103" ]
1602.02410#12
Exploring the Limits of Language Modeling
# 4.2. Model Setup

The typical measure used for reporting progress in language modeling is perplexity, which is the exponentiated average per-word negative log-probability on the holdout data set: $e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{w_i}}$. We follow the standard procedure and sum over all the words (including the end-of-sentence symbol).

We used the 1B Word Benchmark data set without any preprocessing. Given the shuffled sentences, they are input to the network as a batch of independent streams of words. Whenever a sentence ends, a new one starts without any padding (thus maximizing the occupancy per batch).

For the models that consume characters as inputs or as targets, each word is fed to the model as a sequence of character IDs of prespecified length (see Figure 1(b)). The words were processed to include special begin and end of word tokens and were padded to reach the expected length. I.e., if the maximum word length was 10, the word
1602.02410#11
1602.02410#13
1602.02410
[ "1512.00103" ]
1602.02410#13
Exploring the Limits of Language Modeling
"cat" would be transformed to "$cat^" followed by padding characters, giving the character CNN a fixed-length input.

Thus, we combine the word and character-level models by feeding a word-level LSTM hidden state $h$ into a small LSTM that predicts the target word one character at a time (see Figure 1(c)). In order to make the whole process reasonably efficient, we train the standard LSTM model until convergence, freeze its weights, and replace the standard word-level Softmax layer with the aforementioned character-level LSTM.

In our experiments we found that limiting the maximum word length in training to 50 was sufficient to reach very good results, while 32 was clearly insufficient. We used 256 characters in our vocabulary, and the non-ASCII symbols were represented as a sequence of bytes.

The resulting model scales independently of vocabulary size, both for training and inference. However, it does seem to be worse than the regular and CNN Softmax; we are hopeful that further research will enable these models to replace fixed-vocabulary models whilst being computationally attractive.

# 4. Experiments

# 4.3. Model Architecture
1602.02410#12
1602.02410#14
1602.02410
[ "1512.00103" ]
1602.02410#14
Exploring the Limits of Language Modeling
We evaluated many variations of RNN LM architectures. These include the dimensionalities of the embedding layers, the state and projection sizes, and the number of LSTM layers to use. Exhaustively trying all combinations would be extremely time consuming for such a large data set, but our findings suggest that LSTMs with a projection layer (i.e., a bottleneck between hidden states, as in (Sak et al., 2014)) trained with truncated BPTT (Williams & Peng, 1990) for 20 steps performed well.

All experiments were run using the TensorFlow system (Abadi et al., 2015), with the exception of some older models which were used in the ensemble.
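Below is a numpy sketch of one step of an LSTM with a projection layer in the style of Sak et al. (2014): the cell output is projected to a lower-dimensional state that is used both for the recurrence and as the layer's output. Sizes are toy values, and the gate layout is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_in, N_cell, N_proj = 16, 64, 8

Wx = rng.normal(scale=0.1, size=(4 * N_cell, N_in))    # input weights
Wr = rng.normal(scale=0.1, size=(4 * N_cell, N_proj))  # recurrent weights
b = np.zeros(4 * N_cell)
P = rng.normal(scale=0.1, size=(N_proj, N_cell))       # projection matrix

def lstmp_step(x, r_prev, c_prev):
    gates = Wx @ x + Wr @ r_prev + b
    i, f, g, o = np.split(gates, 4)                    # assumed gate order
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    r = P @ h                  # bottleneck: recurrence and output both use r
    return r, c

r, c = np.zeros(N_proj), np.zeros(N_cell)
for _ in range(20):            # a truncated BPTT window of 20 steps
    r, c = lstmp_step(rng.normal(size=N_in), r, c)
print(r.shape, c.shape)        # (8,) (64,)
```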
1602.02410#13
1602.02410#15
1602.02410
[ "1512.00103" ]
1602.02410#15
Exploring the Limits of Language Modeling
# 4.1. Data Set

The experiments are performed on the 1B Word Benchmark data set introduced by (Chelba et al., 2013), which is a publicly available benchmark for measuring progress in statistical language modeling. The data set contains about 0.8B words with a vocabulary of 793471 words, including sentence boundary markers. All the sentences are shuffled and the duplicates are removed. The words that are out of vocabulary (OOV) are marked with a special UNK token (there are approximately 0.3% such words).

Following (Zaremba et al., 2014), we use dropout (Srivastava, 2013) before and after every LSTM layer. The biases of the LSTM forget gate were initialized to 1.0 (Jozefowicz et al., 2015). The size of the models will be described in more detail in the following sections, and the choices of hyper-parameters will be released as open source upon publication.

For any model using character embedding CNNs, we closely follow the architecture from (Kim et al., 2015). The only important difference is that we use a larger number of convolutional features, 4096, to give enough capacity to the model. The resulting embedding is then linearly transformed to match the LSTM projection sizes. This allows it to match the performance of regular word embeddings while using only a small fraction of the parameters.
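The following sketch illustrates, under assumptions, the character CNN input pipeline: begin/end-of-word markers, padding to a fixed length, a 1-d convolution with max-pooling over time, and a linear map to the projection size. The marker and pad symbols, the single filter width, and all sizes are illustrative; the actual model uses 4096 features and the highway layers of (Kim et al., 2015), which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET, CHAR_DIM, N_FILTERS, WIDTH, PROJ = 256, 16, 32, 3, 8

char_emb = rng.normal(scale=0.1, size=(ALPHABET, CHAR_DIM))
filters = rng.normal(scale=0.1, size=(N_FILTERS, WIDTH * CHAR_DIM))
proj = rng.normal(scale=0.1, size=(PROJ, N_FILTERS))

def word_to_ids(word, max_len=10, pad=0):
    # begin ($) / end (^) of word markers, then pad to a fixed length
    ids = [ord(c) for c in "$" + word + "^"][:max_len]
    return ids + [pad] * (max_len - len(ids))

def char_cnn_embedding(word):
    x = char_emb[word_to_ids(word)]                 # (max_len, CHAR_DIM)
    windows = np.stack([x[i:i + WIDTH].ravel()      # sliding 1-d windows
                        for i in range(len(x) - WIDTH + 1)])
    features = np.tanh(windows @ filters.T)         # (n_windows, N_FILTERS)
    pooled = features.max(axis=0)                   # max-pool over time
    return proj @ pooled                            # match projection size

print(char_cnn_embedding("cat").shape)              # (8,)
print(char_cnn_embedding("catz").shape)             # OOV words work too
```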
1602.02410#14
1602.02410#16
1602.02410
[ "1512.00103" ]
1602.02410#16
Exploring the Limits of Language Modeling
Table 1. Best results of single models on the 1B Word Benchmark. Our results are shown below previous work.

MODEL                                                        TEST PERPLEXITY   NUMBER OF PARAMS [BILLIONS]
Sigmoid-RNN-2048 (Ji et al., 2015a)                          68.3              4.1
Interpolated KN 5-gram, 1.1B n-grams (Chelba et al., 2013)   67.6              1.76
Sparse Non-Negative Matrix LM (Shazeer et al., 2015)         52.9              33
RNN-1024 + MaxEnt 9-gram features (Chelba et al., 2013)      51.3              20
LSTM-512-512                                                 54.1              0.82
LSTM-1024-512                                                48.2              0.82
LSTM-2048-512                                                43.7              0.83
LSTM-8192-2048 (no dropout)                                  37.9              3.3
LSTM-8192-2048 (50% dropout)                                 32.2              3.3
2-layer LSTM-8192-1024 (BIG LSTM)                            30.6              1.8
BIG LSTM + CNN inputs                                        30.0              1.04
BIG LSTM + CNN inputs + CNN Softmax                          39.8              0.29
BIG LSTM + CNN inputs + CNN Softmax + 128-dim correction     35.8              0.39
BIG LSTM + CNN inputs + Char LSTM predictions                47.9              0.23

Table 2. Best results of ensembles on the 1B Word Benchmark.
1602.02410#15
1602.02410#17
1602.02410
[ "1512.00103" ]
1602.02410#17
Exploring the Limits of Language Modeling
MODEL                                          TEST PERPLEXITY
Large ensemble (Chelba et al., 2013)           43.8
RNN+KN-5 (Williams et al., 2015)               42.4
RNN+KN-5 (Ji et al., 2015a)                    42.0
RNN+SNM10-SKIP (Shazeer et al., 2015)          41.3
Large ensemble (Shazeer et al., 2015)          41.0
Our 10 best LSTM models (equal weights)        26.3
Our 10 best LSTM models (optimal weights)      26.1
10 LSTMs + KN-5 (equal weights)                25.3
10 LSTMs + KN-5 (optimal weights)              25.1
10 LSTMs + SNM10-SKIP (Shazeer et al., 2015)   23.7

# 4.4. Training Procedure

The models were trained until convergence with an AdaGrad optimizer using a learning rate of 0.2. In all the experiments the RNNs were unrolled for 20 steps without ever resetting the LSTM states. We used a batch size of 128. We clip the gradients of the LSTM weights such that their norm is bounded by 1.0 (Pascanu et al., 2012).

We used a large number of negative (or noise) samples: 8192 such samples were drawn per step, but they were shared across all the target words in the batch (2560 in total, i.e. 128 times 20 unrolled steps). This results in multiplying a (2560 x 1024) matrix by a (1024 x (8192+1)) matrix, instead of multiplying (2560 x 1024) by (1024 x 793471), i.e. about 100-fold less computation.

Using these hyper-parameters we found large LSTMs to be relatively easy to train. The same learning rate was used in almost all of the experiments. In a few cases we had to reduce it by an order of magnitude. Unless otherwise stated, the experiments were performed with 32 GPU workers and asynchronous gradient updates.
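A quick back-of-the-envelope check of the roughly 100-fold saving quoted above: the Softmax matrix multiplication shrinks from (batch·steps, proj) x (proj, |V|) to (batch·steps, proj) x (proj, samples+1).

```python
# Multiply-accumulate counts for the full vs sampled Softmax matmul.
batch, steps, proj = 128, 20, 1024
vocab, samples = 793471, 8192

full = (batch * steps) * proj * vocab          # (2560 x 1024) x (1024 x 793471)
sampled = (batch * steps) * proj * (samples + 1)  # (2560 x 1024) x (1024 x 8193)
print(full / sampled)                          # ~96.8, i.e. about 100-fold less
```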
1602.02410#16
1602.02410#18
1602.02410
[ "1512.00103" ]
1602.02410#18
Exploring the Limits of Language Modeling
Further details will be fully specified with the code upon publication. Training a model with such a large target vocabulary (793471 words) required care with some details of the approximation to the full Softmax using importance sampling.

# 5. Results and Analysis

In this section we summarize the results of our experiments and do an in-depth analysis. Table 1 contains all results for our models compared to previously published work. Table 2 shows previous and our own work on ensembles of models. We hope that our encouraging results, which improved the best perplexity of a single model from 51.3 to 30.0 (whilst reducing the model size considerably) and set a new record with ensembles at 23.7, will enable rapid research and progress to advance Language Modeling.
1602.02410#17
1602.02410#19
1602.02410
[ "1512.00103" ]
1602.02410#19
Exploring the Limits of Language Modeling
For this purpose, we will release the model weights and recipes upon publication.

# 5.1. Size Matters

Table 3. The test perplexities of an LSTM-2048-512 trained with different losses versus number of epochs. The model needs about 40 minutes per epoch. The first epoch is a bit slower because we slowly increase the number of workers.

Unsurprisingly, size matters: when training on a very large and complex data set, fitting the training data with an LSTM is fairly challenging. Thus, the size of the LSTM layer is a very important factor that influences the results, as seen in Table 1. The best models are the largest we were able to fit
1602.02410#18
1602.02410#20
1602.02410
[ "1512.00103" ]
1602.02410#20
Exploring the Limits of Language Modeling
into GPU memory. Our largest model was a 2-layer LSTM with 8192+1024 dimensional recurrent state in each of the layers. Increasing the embedding and projection size also helps, but causes a large increase in the number of parameters, which is less desirable. Lastly, training an RNN instead of an LSTM yields poorer results (about 5 perplexity worse) for a comparable model size.

EPOCHS   NCE    IS     TRAINING TIME [HOURS]
1        97     60     1
5        58     47.5   4
10       53     45     8
20       49     44     14
50       46.1   43.7   34

Table 4.
1602.02410#19
1602.02410#21
1602.02410
[ "1512.00103" ]
1602.02410#21
Exploring the Limits of Language Modeling
Nearest neighbors in the character CNN embedding space of a few out-of-vocabulary words. Even for words that the model has never seen, the model usually still finds reasonable neighbors.

# 5.2. Regularization Importance

As shown in Table 1, using dropout improves the results. To our surprise, even relatively small models (e.g., a single-layer LSTM with 2048 units projected to 512-dimensional outputs) can overfit the training set if trained long enough, eventually degrading holdout set performance.
1602.02410#20
1602.02410#22
1602.02410
[ "1512.00103" ]
1602.02410#22
Exploring the Limits of Language Modeling
WORD         TOP-1        TOP-2         TOP-3
INCERDIBLE   INCREDIBLE   NONEDIBLE     EXTENDIBLE
WWW.A.COM    WWW.AA.COM   WWW.AAA.COM   WWW.CA.COM
7546         7646         7534          8566
TOWNHAL1     TOWNHALL     DJC2          MOODSWING360
KOMARSKI     KOHARSKI     KONARSKI      KOMANSKI

Using dropout on non-recurrent connections largely mitigates these issues. While over-fitting still occurs, there is no more need for early stopping.
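A minimal numpy sketch of these two regularization choices, dropout on the non-recurrent connections and the forget-gate bias initialization; the gate ordering in the bias vector is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, train=True):
    # inverted dropout: mask and rescale at train time, identity at test time
    if not train or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

n_cell = 2048
bias = np.zeros(4 * n_cell)        # gates assumed ordered [i, f, g, o]
bias[n_cell:2 * n_cell] = 1.0      # forget-gate bias initialized to 1.0

x = rng.normal(size=(128, 512))
x = dropout(x, rate=0.10)          # applied before the LSTM layer
# ... LSTM layer runs here; recurrent connections are left undropped ...
h = rng.normal(size=(128, 512))
h = dropout(h, rate=0.10)          # applied after the LSTM layer
print(x.shape, h.shape, bias[n_cell])
```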
1602.02410#21
1602.02410#23
1602.02410
[ "1512.00103" ]
1602.02410#23
Exploring the Limits of Language Modeling
For models that had 4096 or fewer units in the LSTM layer, we used a 10% dropout probability. For larger models, 25% was significantly better. Even with such regularization, perplexities on the training set can be as much as 6 points below test.

In one experiment we tried to use a smaller vocabulary comprising the 100,000 most frequent words and found the difference between train and test to be smaller, which suggests that too much capacity is given to rare words. This is less of an issue with character CNN embedding models, as the embeddings are shared across all words.

Our experiments show that using character-level embeddings is feasible and does not degrade performance; in fact, our best single model uses a Character CNN embedding. An additional advantage is that the number of parameters of the input layer is reduced by a factor of 11 (though training speed is slightly worse). For inference, the embeddings can be precomputed, so there is no speed penalty. Overall, the embedding of the best model is parametrized by 72M weights (down from 820M weights).

Table 4 shows a few examples of nearest neighbor embeddings for some out-of-vocabulary words when character CNNs are used.
1602.02410#22
1602.02410#24
1602.02410
[ "1512.00103" ]
1602.02410#24
Exploring the Limits of Language Modeling
# 5.3. Importance Sampling is Data Efficient

Table 3 shows the test perplexities of the NCE vs IS loss after a few epochs of a 2048-unit LSTM with 512 projection. The IS objective significantly improves the speed and the overall performance of the model when compared to NCE.

# 5.5. Smaller Models with CNN Softmax

Even with character-level embeddings, the model is still fairly large (though much smaller than the best competing models from previous work). Most of the parameters are in the linear layer before the Softmax: 820M versus a total of 1.04B parameters.
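A quick sanity check of this parameter accounting, assuming the 1024-dimensional projection of the big model:

```python
# The full Softmax weight matrix of size |V| x projection dominates the
# parameter count, roughly reproducing the ~820M figure quoted above.
vocab, proj = 793471, 1024
softmax_params = vocab * proj
print(f"{softmax_params / 1e6:.0f}M Softmax parameters")  # ~813M
```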
1602.02410#23
1602.02410#25
1602.02410
[ "1512.00103" ]
1602.02410#25
Exploring the Limits of Language Modeling
# 5.4. Word Embeddings vs Character CNN

Replacing the embedding layer with a parametrized neural network that processes the characters of a given word allows the model to consume arbitrary words and is not restricted to a fixed vocabulary. This property is useful for data sets with conversational or informal text as well as for morphologically rich languages.

In one of the experiments we froze the word-LSTM after convergence and replaced the Softmax layer with the CNN Softmax sub-network.
1602.02410#24
1602.02410#26
1602.02410
[ "1512.00103" ]
1602.02410#26
Exploring the Limits of Language Modeling
Without any fine-tuning, that model was able to reach 39.8 perplexity with only 293M weights (as seen in Table 1).

As described in Section 3.2, adding a "correction" word embedding term alleviates the gap between the regular and the CNN Softmax. Indeed, we can trade off model size versus perplexity. For instance, by adding 100M weights (through a 128-dimensional bottleneck embedding) we achieve 35.8 perplexity (see Table 1).

To contrast with the CNN Softmax, we also evaluated a model that replaces the Softmax layer with a smaller LSTM that predicts one character at a time (see Section 3.3). Such a model does not have to learn long dependencies, because the base LSTM still operates at the word level (see Figure 1(c)). With a single-layer LSTM of 1024 units we reached 49.0 test perplexity, far below the best model. In order to make the comparison fairer, we performed a very expensive marginalization over the words in the vocabulary (to rule out words not in the dictionary, to which the character LSTM would assign some probability). With this marginalization, the perplexity improved slightly, down to 47.9.

Interestingly, including the best N-gram model reduces the perplexity by 1.2 points, even though the model is rather weak on its own (67.6 perplexity). Most previous work had to either ensemble with the best N-gram model (as their RNNs only used a limited output vocabulary of a few thousand words), or use N-gram features as additional input to the RNN. Our results, on the contrary, suggest that N-grams are of limited benefit, and that a carefully trained LSTM LM is the most competitive model.
1602.02410#25
1602.02410#27
1602.02410
[ "1512.00103" ]
1602.02410#27
Exploring the Limits of Language Modeling
# 5.8. LSTMs are best on the tail words

Figure 2 shows the difference in log probabilities between our best model (at 30.0 perplexity) and the KN-5. As can be seen from the plot, the LSTM is better across all the buckets and significantly outperforms KN-5 on the rare words. This is encouraging, as it seems to suggest that LSTM LMs may fare even better for languages or data sets where the number of rare words is larger than in traditional N-gram models.

# 5.9. Samples from the model

To qualitatively evaluate the model, we sampled many sentences. We discarded short and politically incorrect ones, but the sample shown below is otherwise "raw" (i.e., not hand picked). The samples are of high quality, which is not a surprise given the perplexities attained, but there are still some occasional mistakes. Sentences generated by the ensemble (about 26 perplexity):

Figure 2. The difference in log probabilities between the best LSTM and KN-5 (higher is better), plotted over word buckets of equal size (less frequent words on the right). The words from the holdout set are grouped into 25 buckets of equal size based on their frequencies.
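The following numpy sketch mirrors the Figure 2 analysis under assumptions: words are grouped into 25 equal-size frequency buckets and the mean per-word log-probability difference between two models is computed per bucket. The frequencies and log-probabilities below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
freq = rng.zipf(1.5, size=n)                   # toy word frequencies
lp_lstm = rng.normal(-5.0, 1.0, size=n)        # toy per-word log-probs (LSTM)
lp_kn5 = lp_lstm - rng.gamma(1.0, 0.5, size=n) # toy KN-5, slightly worse

order = np.argsort(-freq)                      # most frequent words first
buckets = np.array_split(order, 25)            # 25 equal-size buckets
diffs = [np.mean(lp_lstm[b] - lp_kn5[b]) for b in buckets]
print(np.round(diffs, 2))                      # one mean difference per bucket
```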
1602.02410#26
1602.02410#28
1602.02410
[ "1512.00103" ]
1602.02410#28
Exploring the Limits of Language Modeling
< S > With even more new technologies coming onto the market quickly during the past three years, an increasing number of companies now must tackle the ever-changing and ever-changing environmental challenges online. < S > Check back for updates on this breaking news story. < S > About 800 people gathered at Hever Castle on Long Beach from noon to 2pm, three to four times that of the funeral cortège. < S > We are aware of written instructions from the copyright holder not to, in any way, mention Rosenberg'
1602.02410#27
1602.02410#29
1602.02410
[ "1512.00103" ]
1602.02410#29
Exploring the Limits of Language Modeling
s negative comments if they are relevant as indicated in the documents," eBay said in a statement. < S > It is now known that coffee and cacao products can do no harm on the body. < S > Yuri Zhirkov was in attendance at the Stamford Bridge at the start of the second half but neither Drogba nor Malouda was able to push on through the Barcelona defence.

# 5.6. Training Speed

We used 32 Tesla K40 GPUs to train our models. The smaller version of the LSTM model with 2048 units and 512 projections needs less than 10 hours to reach below 45 perplexity, and after only 2 hours of training the model beats the previous state of the art on this data set. The best model needs about 5 days to get to 35 perplexity and 10 days to 32.5. The best results were achieved after 3 weeks of training.
1602.02410#28
1602.02410#30
1602.02410
[ "1512.00103" ]
1602.02410#30
Exploring the Limits of Language Modeling
See Table 3 for more details.

# 5.7. Ensembles

We averaged several of our best models and were able to reach 23.7 test perplexity (more details and results can be seen in Table 2), which is more than a 40% improvement over previous work.

# 6. Discussion and Conclusions

In this paper we have shown that RNN LMs can be trained on large amounts of data and outperform competing models, including carefully tuned N-grams. The reduction in perplexity from 51.3 to 30.0 is due to several key components which we studied in this paper. Thus, a large, regularized LSTM LM, with projection layers and trained with an approximation to the true Softmax via importance sampling, performs much better than N-grams. Unlike previous work, we do not need to interpolate between the RNN LM and the N-gram model, and the gains of doing so are rather marginal.

By exploring recent advances in model architectures (e.g. LSTMs), exploiting small character CNNs, and by sharing our findings in this paper and the accompanying code and models (to be released upon publication), we hope to inspire research on large scale Language Modeling, a problem we consider crucial towards language understanding. We hope for future research to focus on reasonably sized datasets, taking inspiration from the recent advances seen in the computer vision community thanks to efforts such as ImageNet (Deng et al., 2009).
1602.02410#29
1602.02410#31
1602.02410
[ "1512.00103" ]
1602.02410#31
Exploring the Limits of Language Modeling
Bengio, Yoshua, Schwenk, Holger, Senécal, Jean-Sébastien, Morin, Fréderic, and Gauvain, Jean-Luc. Neural probabilistic language models. In Innovations in Machine Learning, pp. 137–186. Springer, 2006.

Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, and Robinson, Tony. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.

# Acknowledgements

We thank Ciprian Chelba, Ilya Sutskever, and the Google Brain Team for their help and discussions. We also thank Koray Kavukcuoglu for his help with the manuscript.

Cho, Kyunghyun, Van Merriënboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua.
1602.02410#30
1602.02410#32
1602.02410
[ "1512.00103" ]
1602.02410#32
Exploring the Limits of Language Modeling
Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

# References

Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang.
1602.02410#31
1602.02410#33
1602.02410
[ "1512.00103" ]
1602.02410#33
Exploring the Limits of Language Modeling
TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.

Filippova, Katja, Alfonseca, Enrique, Colmenares, Carlos A, Kaiser, Lukasz, and Vinyals, Oriol.
1602.02410#32
1602.02410#34
1602.02410
[ "1512.00103" ]
1602.02410#34
Exploring the Limits of Language Modeling
Sentence compression by deletion with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 360–368, 2015.

Gers, Felix A, Schmidhuber, Jürgen, and Cummins, Fred. Learning to forget: Continual prediction with LSTM. Neural computation, 12(10):2451–2471, 2000.

Gillick, Dan, Brunk, Cliff, Vinyals, Oriol, and Subramanya, Amarnag. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103, 2015.

Arisoy, Ebru, Sainath, Tara N, Kingsbury, Brian, and Ramabhadran, Bhuvana. Deep neural network language models. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pp. 20–
1602.02410#33
1602.02410#35
1602.02410
[ "1512.00103" ]
1602.02410#35
Exploring the Limits of Language Modeling
28. Association for Computational Linguistics, 2012.

Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Graves, Alex and Schmidhuber, Jürgen. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.

Ballesteros, Miguel, Dyer, Chris, and Smith, Noah A. Improved transition-based parsing by modeling characters instead of words with LSTMs. arXiv preprint arXiv:1508.00657, 2015.
1602.02410#34
1602.02410#36
1602.02410
[ "1512.00103" ]
1602.02410#36
Exploring the Limits of Language Modeling
Gutmann, Michael and Hyvärinen, Aapo. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics, pp. 297–304, 2010.

Bengio, Yoshua and Senécal, Jean-Sébastien. Adaptive importance sampling to accelerate training of a neural probabilistic language model. Neural Networks, IEEE Transactions on, 19(4):713–722, 2008.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

Bengio, Yoshua, Senécal, Jean-Sébastien, et al. Quick training of probabilistic neural nets by importance sampling. In AISTATS, 2003.

Ji, Shihao, Vishwanathan, S. V. N., Satish, Nadathur, Anderson, Michael J., and Dubey, Pradeep. Blackout:
1602.02410#35
1602.02410#37
1602.02410
[ "1512.00103" ]
1602.02410#37
Exploring the Limits of Language Modeling
Speeding up recurrent neural network language models with very large vocabularies. CoRR, abs/1511.06909, 2015a. URL http://arxiv.org/abs/1511.06909.

Mikolov, Tomas and Zweig, Geoffrey. Context dependent recurrent neural network language model. In SLT, pp. 234–239, 2012.

Ji, Yangfeng, Cohn, Trevor, Kong, Lingpeng, Dyer, Chris, and Eisenstein, Jacob. Document context language models. arXiv preprint arXiv:1511.03962, 2015b.

Jozefowicz, Rafal, Zaremba, Wojciech, and Sutskever, Ilya. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 2342–2350, 2015.

Mikolov, Tomas, Karafiát, Martin, Burget, Lukas, Cernocký, Jan, and Khudanpur, Sanjeev.
1602.02410#36
1602.02410#38
1602.02410
[ "1512.00103" ]
1602.02410#38
Exploring the Limits of Language Modeling
Recurrent neural network based language model. In INTERSPEECH, volume 2, pp. 3, 2010.

Mikolov, Tomas, Deoras, Anoop, Kombrink, Stefan, Burget, Lukas, and Cernocký, Jan. Empirical evaluation and combination of advanced language modeling techniques. In INTERSPEECH, pp. 605–608, 2011.

Kalchbrenner, Nal, Grefenstette, Edward, and Blunsom, Phil.
1602.02410#37
1602.02410#39
1602.02410
[ "1512.00103" ]
1602.02410#39
Exploring the Limits of Language Modeling
A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188, 2014.

Mnih, Andriy and Hinton, Geoffrey E. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pp. 1081–1088, 2009.

Kim, Yoon, Jernite, Yacine, Sontag, David, and Rush, Alexander M. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.

Kneser, Reinhard and Ney, Hermann. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pp. 181–184. IEEE, 1995.

Mnih, Andriy and Kavukcuoglu, Koray.
1602.02410#38
1602.02410#40
1602.02410
[ "1512.00103" ]
1602.02410#40
Exploring the Limits of Language Modeling
Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, pp. 2265–2273, 2013.

Morin, Frederic and Bengio, Yoshua. Hierarchical probabilistic neural network language model. In AISTATS, volume 5, pp. 246–252. Citeseer, 2005.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua.
1602.02410#39
1602.02410#41
1602.02410
[ "1512.00103" ]
1602.02410#41
Exploring the Limits of Language Modeling
On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.

Le Cun, Yann, Boser, Bernhard, Denker, John S, Henderson, D, Howard, Richard E, Hubbard, W, and Jackel, Lawrence D. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems. Citeseer, 1990.

Ling, Wang, Luís, Tiago, Marujo, Luís, Astudillo, Ramón Fernandez, Amir, Silvio, Dyer, Chris, Black, Alan W, and Trancoso, Isabel.
1602.02410#40
1602.02410#42
1602.02410
[ "1512.00103" ]
1602.02410#42
Exploring the Limits of Language Modeling
Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096, 2015.

Rush, Alexander M, Chopra, Sumit, and Weston, Jason. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.

Sak, Hasim, Senior, Andrew W, and Beaufays, Françoise. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338–342, 2014.

Schuster, Mike and Paliwal, Kuldip K. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681, 1997.

Luong, Minh-Thang, Sutskever, Ilya, Le, Quoc V, Vinyals, Oriol, and Zaremba, Wojciech. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.

Marcus, Mitchell P, Marcinkiewicz, Mary Ann, and Santorini, Beatrice. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.

Mikolov, Tomáš.
1602.02410#41
1602.02410#43
1602.02410
[ "1512.00103" ]
1602.02410#43
Exploring the Limits of Language Modeling
Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April, 2012.

Schwenk, Holger, Rousseau, Anthony, and Attik, Mohammed. Large, pruned or continuous space language models on a GPU for statistical machine translation. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pp. 11–
1602.02410#42
1602.02410#44
1602.02410
[ "1512.00103" ]
1602.02410#44
Exploring the Limits of Language Modeling
19. Association for Computational Linguistics, 2012.

Serban, Iulian Vlad, Sordoni, Alessandro, Bengio, Yoshua, Courville, Aaron C., and Pineau, Joelle. Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808, 2015. URL http://arxiv.org/abs/1507.04808.
1602.02410#43
1602.02410#45
1602.02410
[ "1512.00103" ]
1602.02410#45
Exploring the Limits of Language Modeling
Improving neural networks with dropout. PhD thesis, University of Toronto, 2013. Srivastava, Nitish, Mansimov, Elman, and Salakhutdinov, Ruslan. Unsupervised learning of video representations using lstms. arXiv preprint arXiv:1502.04681, 2015a. Srivastava, Rupesh K, Greff, Klaus, and Schmidhuber, In Advances in J¨urgen. Training very deep networks. Neural Information Processing Systems, pp. 2368â 2376, 2015b. Sutskever, Ilya, Martens, James, and Hinton, Geoffrey E.
1602.02410#44
1602.02410#46
1602.02410
[ "1512.00103" ]
1602.02410#46
Exploring the Limits of Language Modeling
Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1017–1024, 2011.

Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.

Vaswani, Ashish, Zhao, Yinggong, Fossum, Victoria, and Chiang, David.
1602.02410#45
1602.02410#47
1602.02410
[ "1512.00103" ]
1602.02410#47
Exploring the Limits of Language Modeling
Decoding with large-scale neural language models improves translation. In Proceedings of EMNLP, 2013.

Vincent, Pascal, de Brébisson, Alexandre, and Bouthillier, Xavier. Efficient exact gradient update for training deep networks with very large sparse targets. In Advances in Neural Information Processing Systems, pp. 1108–1116, 2015.

Vinyals, Oriol and Le, Quoc. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.

Wang, Tian and Cho, Kyunghyun. Larger-context language modelling. arXiv preprint arXiv:1511.03729, 2015.

Williams, Ronald J and Peng, Jing.
1602.02410#46
1602.02410#48
1602.02410
[ "1512.00103" ]
1602.02410#48
Exploring the Limits of Language Modeling
An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural computation, 2(4):490–501, 1990.

Williams, Will, Prasad, Niranjani, Mrva, David, Ash, Tom, and Robinson, Tony. Scaling recurrent neural network language models. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 5391–5395. IEEE, 2015.
1602.02410#47
1602.02410#49
1602.02410
[ "1512.00103" ]
1602.02410#49
Exploring the Limits of Language Modeling
Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
1602.02410#48
1602.02410
[ "1512.00103" ]
1602.01783#0
Asynchronous Methods for Deep Reinforcement Learning
# Asynchronous Methods for Deep Reinforcement Learning

Volodymyr Mnih¹, Adrià Puigdomènech Badia¹, Mehdi Mirza¹,², Alex Graves¹, Tim Harley¹, Timothy P. Lillicrap¹, David Silver¹, Koray Kavukcuoglu¹

¹ Google DeepMind
² Montreal Institute for Learning Algorithms (MILA), University of Montreal

# Abstract

We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training, allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state of the art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
1602.01783#1
1602.01783
[ "1509.02971" ]
1602.01783#1
Asynchronous Methods for Deep Reinforcement Learning
# 1. Introduction

Deep neural networks provide rich representations that can enable reinforcement learning (RL) algorithms to perform effectively. However, it was previously thought that the combination of simple online RL algorithms with deep neural networks was fundamentally unstable. Instead, a variety of solutions have been proposed to stabilize the algorithm (Riedmiller, 2005; Mnih et al., 2013; 2015; Van Hasselt et al., 2015; Schulman et al., 2015a). These approaches share a common idea: the sequence of observed data encountered by an online RL agent is non-stationary, and on-
1602.01783#0
1602.01783#2
1602.01783
[ "1509.02971" ]