Dataset columns (type, min to max):
doi: stringlengths, 10 to 10
chunk-id: int64, 0 to 936
chunk: stringlengths, 401 to 2.02k
id: stringlengths, 12 to 14
title: stringlengths, 8 to 162
summary: stringlengths, 228 to 1.92k
source: stringlengths, 31 to 31
authors: stringlengths, 7 to 6.97k
categories: stringlengths, 5 to 107
comment: stringlengths, 4 to 398
journal_ref: stringlengths, 8 to 194
primary_category: stringlengths, 5 to 17
published: stringlengths, 8 to 8
updated: stringlengths, 8 to 8
references: list
1702.03118
25
Figure 4: Learning curves for a dSiLU network agent with 250 hidden nodes in 10×10 Tetris. The figure shows the average score over five separate runs (thick solid lines) and the scores of individual runs (thin dashed lines). The red dashed line shows the previous best average score of 4,200 points achieved by the CBMPI algorithm. 10×10 Tetris is played with the standard seven tetrominoes, and the number of actions is 9 for the block-shaped tetromino, 17 for the S-, Z-, and stick-shaped tetrominoes, and 34 for the J-, L-, and T-shaped tetrominoes. In each time step, the agent gets a score equal to the number of completed rows, with a maximum of +4 points that can only be achieved by the stick-shaped tetromino. We trained a shallow neural network agent with dSiLU units in the hidden layer. To handle the more complex learning task, we increased the number of hidden units to 250 and the number of episodes to 400,000. We repeated the experiment for five separate runs. We used the same 20 state features as in the SZ-Tetris experiment, but the length of the binary state vector was reduced to 260 due to the smaller board size. The reward function was changed as follows for the same reason:
1702.03118#25
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
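The SiLU and dSiLU described in the summary above have simple closed forms: the SiLU multiplies its input by the logistic sigmoid, and the dSiLU is the SiLU's derivative. A minimal NumPy sketch (function names are mine, not taken from the paper's code):

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-z))

def silu(z):
    """SiLU: the input multiplied by its sigmoid, z * sigma(z)."""
    return z * sigmoid(z)

def dsilu(z):
    """dSiLU: derivative of the SiLU, sigma(z) * (1 + z * (1 - sigma(z)))."""
    s = sigmoid(z)
    return s * (1.0 + z * (1.0 - s))

if __name__ == "__main__":
    z = np.linspace(-6, 6, 7)
    print("silu :", np.round(silu(z), 3))
    print("dsilu:", np.round(dsilu(z), 3))
```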
1702.03044
26
Based on this assumption, we present INQ, which incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition divides the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in our INQ. The weights in the first group are responsible for forming a low-precision base for the original model, thus they are quantized by using Equation (4). The weights in the second group adapt to compensate for the loss in model accuracy, thus they are the ones to be re-trained. Once the first run of the quantization and re-training operations is finished, all three operations are further conducted on the second weight group in an iterative manner, until all the weights are converted to be either powers of two or zero, acting as an incremental network quantization and accuracy enhancement procedure. As a result, the accuracy loss under low-precision CNN quantization can be well suppressed by our INQ. Illustrative results at iterative steps of our INQ are provided in Figure 2. For the lth layer, weight partition can be defined as $A_l^{(1)} \cup A_l^{(2)} = \{W_l(i, j)\}$, and $A_l^{(1)} \cap A_l^{(2)} = \emptyset$, (5)
1702.03044#26
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
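The chunk above refers to Equation (4), which maps each weight in the first group to a power of two or zero; that equation is not included in this excerpt, so the sketch below assumes a simple nearest-candidate variant over the set P_l = {±2^p : n2 ≤ p ≤ n1} ∪ {0}, which illustrates the idea rather than the paper's exact rule:

```python
import numpy as np

def quantize_pow2(weights, n1, n2):
    """Map each weight to the nearest element of P = {+-2^p : n2 <= p <= n1} U {0}
    (an assumed stand-in for Eq. (4))."""
    exponents = np.arange(n2, n1 + 1)
    candidates = np.concatenate(([0.0], 2.0 ** exponents, -(2.0 ** exponents)))
    # For every weight, pick the candidate with the smallest absolute difference.
    idx = np.argmin(np.abs(weights[..., None] - candidates), axis=-1)
    return candidates[idx]

if __name__ == "__main__":
    w = np.array([0.9, -0.3, 0.02, -0.6])
    print(quantize_pow2(w, n1=0, n2=-4))   # 0.9 -> 1.0, -0.3 -> -0.25, 0.02 -> 0.0, -0.6 -> -0.5
```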
1702.03118
26
$r(s) = e^{-(\text{number of holes in } s)/(33/2)}$. (17) We used the same values of the meta-parameters as in the stochastic SZ-Tetris experiment. Figure 4 shows the average learning curve in 10×10 Tetris, as well as learning curves for the five separate runs. The dSiLU network agent reached an average score of 4,900 points over the final 10,000 episodes and the five separate runs, which is a new state-of-the-art in 10×10 Tetris. The previous best average scores are 4,200 points achieved by the CBMPI algorithm, 3,400 points achieved by the DPI algorithm, and 3,000 points achieved by the CE method (Gabillon et al., 2013). The best individual run achieved a final mean score of 5,300 points, which is also a new state-of-the-art, improving on the score of 5,000 points achieved by the CBMPI algorithm.
1702.03118#26
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
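A one-line sketch of the hole-penalizing reward in Eq. (17) above; the hole count itself would come from the Tetris board representation, which is not part of this excerpt:

```python
import math

def tetris_reward(num_holes: int) -> float:
    """Eq. (17): r(s) = exp(-holes / (33/2)); fewer holes gives a reward closer to 1."""
    return math.exp(-num_holes / (33.0 / 2.0))

if __name__ == "__main__":
    for holes in (0, 5, 20):
        print(holes, round(tetris_reward(holes), 3))
```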
1702.03044
27
where $A_l^{(1)}$ denotes the first weight group that needs to be quantized, and $A_l^{(2)}$ denotes the other weight group that needs to be re-trained. We leave the strategies for group partition to be chosen in the experiment section. Here, we define a binary matrix $T_l$ to help distinguish the above two categories of weights. That is, $T_l(i, j) = 0$ means $W_l(i, j) \in A_l^{(1)}$, and $T_l(i, j) = 1$ means $W_l(i, j) \in A_l^{(2)}$. # INCREMENTAL NETWORK QUANTIZATION ALGORITHM Now, we come to the training method. Taking the lth layer as an example, the basic optimization problem of making its weights either powers of two or zero can be expressed as $\min_{W_l} E(W_l) = L(W_l) + \lambda R(W_l)$ s.t. $W_l(i, j) \in P_l$, $1 \le l \le L$, (6)
1702.03044#27
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
27
5,300 points, which is also a new state-of-the-art, improving on the score of 5,000 points achieved by the CBMPI algorithm. It is particularly impressive that the dSiLU network agent achieved its result using features similar to the original Bertsekas features. Using only the Bertsekas features, the CBMPI algorithm, the DPI algorithm, and the CE method could only achieve average scores of about 500 points (Gabillon et al., 2013). The CE method achieved its best score by combining the Bertsekas features, the Dellacherie features (Fahey, 2003), and three original features (Thiery and Scherrer, 2009). The CBMPI algorithm achieved its best score using the same features as the CE method, except for using five original RBF height features instead of the Bertsekas features. # 3.3 Atari 2600 games
1702.03118#27
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
28
where $L(W_l)$ is the network loss, $R(W_l)$ is the regularization term, $\lambda$ is a positive coefficient, and the constraint term indicates that each weight entry $W_l(i, j)$ should be chosen from the set $P_l$, consisting of a fixed number of values that are powers of two, plus zero. Directly solving the above optimization problem by training from scratch is challenging, since it easily runs into convergence problems. By performing the weight partition and group-wise quantization operations beforehand, the optimization problem defined in (6) can be reshaped into an easier version. That is, we only need to optimize the following objective function $\min_{W_l} E(W_l) = L(W_l) + \lambda R(W_l)$ s.t. $W_l(i, j) \in P_l$ if $T_l(i, j) = 0$, $1 \le l \le L$, (7) where $P_l$ is determined by the group-wise quantization operation, and the binary matrix $T_l$ acts as a mask which is determined by the weight partition operation. Since $P_l$ and $T_l$ are known, the optimization problem (7) can be solved using the popular stochastic gradient descent (SGD) method. That is, in INQ, we can get the update scheme for the re-training as
1702.03044#28
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
28
# 3.3 Atari 2600 games To further evaluate the use of value-based on-policy reinforcement learning with eligibility traces and softmax action selection in high-dimensional state space domains, as well as the use of SiLU and dSiLU units, we applied Sarsa(λ) with a deep convolutional neural network function approximator in the Atari 2600 domain using the Arcade Learning Environment (Bellemare et al., 2013). Based on the results for the deep networks in SZ-Tetris, we used SiLU-dSiLU networks with SiLU units in the convolutional layers and dSiLU units in the fully-connected layer. To limit the number of games and prevent a biased selection of the games, we selected the 12 games played by DQN (Mnih et al., 2015) that start with the letters 'A' and 'B': Alien, Amidar, Assault, Asterix, Asteroids, Atlantis, Bank Heist, Battle Zone, Beam Rider, Bowling, Boxing, and Breakout.
1702.03118#28
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03118
29
We used a similar experimental setup as Mnih et al. (2015). We pre-processed the raw 210×160 Atari 2600 RGB frames by extracting the luminance channel, taking the maximum pixel values over consecutive frames to prevent flickering, and then downsampling the grayscale images to 105×80. For computational reasons, we used a smaller network architecture. Instead of three convolutional layers, we used two with half the number of filters, each followed by a max-pooling layer. The input to the network was a 105×80×2 image consisting of the current and the fourth previous pre-processed frame. As we used frame skipping where actions were selected every fourth frame and repeated for the next four frames, we only needed to apply pre-processing to every fourth frame. The first convolutional layer had 16 filters of size 8×8 with a stride of 4. The second convolutional layer had 32 filters of size 4×4 with a stride of 2. The max-pooling layers had pooling windows of size 3×3 with a stride of 2. The convolutional layers were followed by a fully-connected hidden layer with 512 dSiLU units
1702.03118#29
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
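The architecture described in the chunk above (two convolutional layers, each followed by max-pooling, then a 512-unit dSiLU hidden layer and a linear output layer) can be laid out as follows. This is an illustrative reconstruction under my own naming, assuming PyTorch is available; it is not the authors' code, and padding is not specified in the excerpt, so no padding is used.

```python
import torch
import torch.nn as nn

class SiLU(nn.Module):
    # SiLU: z * sigmoid(z)
    def forward(self, z):
        return z * torch.sigmoid(z)

class DSiLU(nn.Module):
    # dSiLU: sigmoid(z) * (1 + z * (1 - sigmoid(z)))
    def forward(self, z):
        s = torch.sigmoid(z)
        return s * (1 + z * (1 - s))

def make_atari_net(num_actions: int) -> nn.Module:
    """Two SiLU conv layers (16 filters 8x8 stride 4, 32 filters 4x4 stride 2),
    each followed by 3x3 stride-2 max-pooling, then a 512-unit dSiLU hidden layer
    and a linear output layer. Input: 2-channel 105x80 image (current frame and
    the fourth previous frame)."""
    return nn.Sequential(
        nn.Conv2d(2, 16, kernel_size=8, stride=4), SiLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(16, 32, kernel_size=4, stride=2), SiLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Flatten(),
        nn.LazyLinear(512), DSiLU(),      # hidden layer, input size inferred on first forward
        nn.Linear(512, num_actions),      # one action value per valid action (4 to 18)
    )

if __name__ == "__main__":
    net = make_atari_net(num_actions=18)
    q_values = net(torch.zeros(1, 2, 105, 80))
    print(q_values.shape)  # torch.Size([1, 18])
```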
1702.03044
30
where γ is a positive learning rate. Note that the binary matrix $T_l$ forces a zero update for the weights that have already been quantized. That is, only the weights that still have floating-point values are updated, akin to the latest pruning methods (Han et al., 2015; Guo et al., 2016) in which only the weights that are not currently removed are re-trained to enhance network accuracy. The whole procedure of our INQ is summarized as Algorithm 1. We would like to highlight that the merits of our INQ are in three aspects: (1) Weight partition introduces importance-aware weight quantization. (2) Group-wise weight quantization introduces much less accuracy loss than simultaneously quantizing all the network weights, thus leaving re-training more room to recover model accuracy. (3) By integrating the operations of weight partition, group-wise quantization and re-training into a nested loop, our INQ has the potential to obtain a lossless low-precision CNN model from the pre-trained full-precision reference.
1702.03044#30
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
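Equation (8) itself falls outside this excerpt, but the chunk above describes its effect: the binary mask $T_l$ zeroes the gradient update for already-quantized weights so that only the floating-point group is re-trained. A hedged NumPy sketch of one such masked update step (names are mine):

```python
import numpy as np

def masked_sgd_step(W, grad_E, T, lr):
    """One re-training update: W <- W - lr * grad_E * T.
    T(i, j) = 0 freezes quantized weights; T(i, j) = 1 lets free weights move."""
    return W - lr * grad_E * T

if __name__ == "__main__":
    W = np.array([[0.5, -0.25], [0.1, 0.03]])      # 0.5 and -0.25 already quantized
    T = np.array([[0, 0], [1, 1]], dtype=float)    # mask from weight partition
    grad = np.array([[0.2, -0.1], [0.05, -0.02]])  # toy gradient of E w.r.t. W
    print(masked_sgd_step(W, grad, T, lr=0.1))     # first row unchanged
```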
1702.03118
30
layers had pooling windows of size 3×3 with a stride of 2. The convolutional layers were followed by a fully-connected hidden layer with 512 dSiLU units and a fully-connected linear output layer with 4 to 18 output (or action-value) units, depending on the number of valid actions in the considered game. We selected meta-parameters by a preliminary search in the Alien, Amidar and Assault games and used the same values for all 12 games: α: 0.001, γ: 0.99, λ: 0.8, τ0: 0.5, and τk: 0.0005. As in Mnih et al. (2015), we clipped the rewards to be between −1 and +1, but we did not clip the values of the TD-errors.
1702.03118#30
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
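The meta-parameters τ0 and τk in the chunk above govern the softmax (Boltzmann) action selection with simple annealing mentioned in the summary. The exact annealing schedule is not given in this excerpt, so the sketch below assumes a commonly used hyperbolic schedule, τ = τ0 / (1 + τk · episode); both the schedule form and the function names are assumptions.

```python
import numpy as np

def softmax_policy(q_values, tau):
    """Boltzmann action-selection probabilities at temperature tau."""
    prefs = q_values / tau
    prefs -= prefs.max()          # numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def annealed_tau(episode, tau0=0.5, tau_k=0.0005):
    """Assumed annealing schedule: the temperature shrinks as episodes accumulate."""
    return tau0 / (1.0 + tau_k * episode)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = np.array([0.1, 0.5, 0.2, 0.0])
    for episode in (0, 10_000, 200_000):
        p = softmax_policy(q, annealed_tau(episode))
        action = rng.choice(len(q), p=p)
        print(episode, np.round(p, 3), action)
```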
1702.03044
31
# Algorithm 1 Incremental network quantization for lossless CNNs with low-precision weights.
Input: X: the training data; $\{W_l : 1 \le l \le L\}$: the pre-trained full-precision CNN model; $\{\sigma_1, \sigma_2, \dots, \sigma_N\}$: the accumulated portions of weights quantized at iterative steps
Output: $\{\widehat{W}_l : 1 \le l \le L\}$: the final low-precision model with the weights constrained to be either powers of two or zero
1: Initialize $A_l^{(1)} \leftarrow \emptyset$, $A_l^{(2)} \leftarrow \{W_l(i, j)\}$, $T_l \leftarrow 1$, for $1 \le l \le L$
2: for $n = 1, 2, \dots, N$ do
3: Reset the base learning rate and the learning policy
4: According to $\sigma_n$, perform layer-wise weight partition and update $A_l^{(1)}$, $A_l^{(2)}$ and $T_l$
5: Based on $A_l^{(1)}$, determine $P_l$ layer-wisely
6: Quantize the weights in $A_l^{(1)}$ by Equation (4) layer-wisely
7: Calculate the feed-forward loss, and update the weights in $\{A_l^{(2)} : 1 \le l \le L\}$ by Equation (8)
1702.03044#31
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
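A compact sketch of the Algorithm 1 loop reconstructed above, for a single layer: at each step the accumulated portion σ_n enlarges the quantized group (chosen here by weight magnitude, i.e. the pruning-inspired strategy), that group is snapped to powers of two or zero, and the remaining weights are nudged by a stand-in re-training update. The quantization and re-training details (Equations (4) and (8)) are simplified placeholders, not the paper's exact formulas.

```python
import numpy as np

def pow2_or_zero(w):
    """Placeholder for Eq. (4): snap to the nearest of {0, +-2^p, -4 <= p <= 0}."""
    cand = np.concatenate(([0.0], 2.0 ** np.arange(-4, 1.0)))
    cand = np.concatenate((cand, -cand[1:]))
    return cand[np.argmin(np.abs(w[..., None] - cand), axis=-1)]

def inq_single_layer(W, portions=(0.5, 0.75, 0.875, 1.0), lr=0.01, retrain_steps=10):
    """Iterate: partition by magnitude, quantize the large-weight group, then
    re-train the free group (a toy gradient stands in for Eq. (8))."""
    W = W.copy()
    for sigma in portions:
        k = int(round(sigma * W.size))                  # weights quantized so far
        quant_idx = np.argsort(np.abs(W).ravel())[::-1][:k]
        T = np.ones(W.size)                             # 1 = free, 0 = quantized
        T[quant_idx] = 0.0
        T = T.reshape(W.shape)
        W = np.where(T == 0, pow2_or_zero(W), W)        # group-wise quantization
        for _ in range(retrain_steps):                  # stand-in re-training
            fake_grad = 0.1 * W                         # a real network loss goes here
            W -= lr * fake_grad * T
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.3, size=(4, 4))
    Wq = inq_single_layer(W)
    print(np.unique(np.abs(Wq)))   # all magnitudes are 0 or powers of two
```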
1702.03118
31
In each of the 12 Atari games, we trained a SiLU-dSiLU agent for 200,000 episodes and the experiments were repeated for two separate runs. An episode started with up to 30 [Figure 5 panels: per-game learning curves (score versus episodes, in hundreds) for Alien, Amidar, Assault, Asterix, Asteroids, Atlantis, Bank Heist, Battle Zone, Beam Rider, Bowling, Boxing, and Breakout, with DQN shown for comparison.]
1702.03118#31
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03118
32
Figure 5: Average learning curves (solid lines) over two separate runs (dashed lines) for the SiLU-dSiLU agents in the 12 Atari games. The dotted lines show the reported results for DQN (red), the Gorila implementation of DQN (green), and double DQN (blue). 'do nothing' actions (no-op condition) and it was played until the end of the game or for a maximum of 18,000 frames (i.e., 5 minutes). Figure 5 shows the average learning curves, as well as the learning curves for the two separate runs, in the 12 Atari 2600 games. Table 2 summarizes the results as the final mean scores computed over the final 100 episodes for the two separate runs, and the best mean scores computed as average scores of the highest mean
1702.03118#32
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
33
8: end for
# 3 EXPERIMENTAL RESULTS
To analyze the performance of our INQ, we perform extensive experiments on the ImageNet large-scale classification task, which is known as the most challenging image classification benchmark so far. The ImageNet dataset has about 1.2 million training images and 50 thousand validation images. Each image is annotated as one of 1000 object classes. We apply our INQ to AlexNet, VGG-16, GoogleNet, ResNet-18 and ResNet-50, covering almost all known deep CNN architectures. Using the center crops of the validation images, we report the results with two standard measures: top-1 error rate and top-5 error rate. For a fair comparison, all pre-trained full-precision (i.e., 32-bit floating-point) CNN models except ResNet-18 are taken from the Caffe model zoo². Note that He et al. (2016) do not release their pre-trained ResNet-18 model to the public, so we use a publicly available re-implementation by Facebook³. Since our method is implemented with Caffe, we make use of an open source tool⁴ to convert the pre-trained ResNet-18 model from Torch to Caffe.
1702.03044#33
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
33
scores (over every 100 episodes) achieved in each of the two runs. The table also shows the reported best mean scores for single runs of DQN computed over 30 episodes, average scores over five separate runs of the Gorila implementation of DQN (Nair et al., 2015) computed over 30 episodes, and single runs of double DQN (van Hasselt et al., 2015) computed over 100 episodes. The last two rows of the table show summary statistics over the 12 games, which were obtained by computing the mean and the median of the DQN normalized scores: $\text{Score}_{\text{DQN normalized}} = \frac{\text{Score}_{\text{agent}} - \text{Score}_{\text{random}}}{\text{Score}_{\text{DQN}} - \text{Score}_{\text{random}}}$
1702.03118#33
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
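The normalization in the chunk above rescales each game's score so that a random agent maps to 0 % and DQN maps to 100 %. A small sketch; the reference scores in the example are placeholders, not values from the paper:

```python
def dqn_normalized(score_agent: float, score_random: float, score_dqn: float) -> float:
    """Score_DQN-normalized = (agent - random) / (DQN - random), in percent."""
    return 100.0 * (score_agent - score_random) / (score_dqn - score_random)

if __name__ == "__main__":
    # Placeholder numbers for one hypothetical game.
    print(round(dqn_normalized(score_agent=1500.0, score_random=200.0, score_dqn=1200.0), 1))  # 130.0
```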
1702.03044
34
# 3.1 RESULTS ON IMAGENET
Table 1: Our INQ converts diverse full-precision deep CNN models (including AlexNet, VGG-16, GoogleNet, ResNet-18 and ResNet-50) to 5-bit low-precision versions with consistently improved model accuracy.
Network | Bit-width | Top-1 error | Top-5 error | Decrease in top-1/top-5 error
AlexNet ref | 32 | 42.76% | 19.77% |
AlexNet | 5 | 42.61% | 19.54% | 0.15%/0.23%
VGG-16 ref | 32 | 31.46% | 11.35% |
VGG-16 | 5 | 29.18% | 9.70% | 2.28%/1.65%
GoogleNet ref | 32 | 31.11% | 10.97% |
GoogleNet | 5 | 30.98% | 10.72% | 0.13%/0.25%
ResNet-18 ref | 32 | 31.73% | 11.31% |
ResNet-18 | 5 | 31.02% | 10.90% | 0.71%/0.41%
ResNet-50 ref | 32 | 26.78% | 8.76% |
ResNet-50 | 5 | 25.19% | 7.55% | 1.59%/1.21%
1702.03044#34
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
35
Setting the expected bit-width to 5, the first set of experiments is performed to verify the efficacy of our INQ on different CNN architectures. Regarding weight partition, there are several candidate strategies, as we tried in our previous work on efficient network pruning (Guo et al., 2016). In Guo et al. (2016), we found random partition and pruning-inspired partition to be the two best choices compared with the others. Thus in this paper, we directly compare these two strategies for weight partition. In the random strategy, the weights in each layer of any pre-trained full-precision deep CNN model are randomly split into two disjoint groups. In the pruning-inspired strategy, the weights are divided into two disjoint groups by comparing their absolute values with layer-wise thresholds which are automatically determined by a given splitting ratio. Here we directly use the pruning-inspired strategy, and the experimental results in Section 3.2 will show why. After re-training for no more than 8 epochs over each pre-trained full-precision model, we obtain the results as shown in Table 1. It can be concluded that the 5-bit CNN models generated by our INQ show consistently improved top-1 and top-5 recognition rates compared with the respective full-precision references. Parameter settings are described below.
1702.03044#35
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
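A sketch of the two partition strategies compared above: the random strategy splits each layer's weights arbitrarily, while the pruning-inspired strategy quantizes the largest-magnitude weights first, using a layer-wise threshold derived from the splitting ratio. Function names and the quantile-based threshold are mine.

```python
import numpy as np

def random_partition(W, ratio, rng):
    """Randomly mark a `ratio` fraction of weights for quantization (mask value 0)."""
    T = np.ones(W.size)
    T[rng.choice(W.size, size=int(round(ratio * W.size)), replace=False)] = 0.0
    return T.reshape(W.shape)

def pruning_inspired_partition(W, ratio):
    """Quantize the weights with the largest absolute values: the threshold is the
    (1 - ratio) quantile of |W|, so a `ratio` fraction falls into the quantized group."""
    threshold = np.quantile(np.abs(W), 1.0 - ratio)
    return (np.abs(W) < threshold).astype(float)   # 0 = quantized group, 1 = re-trained group

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))
    print(random_partition(W, ratio=0.5, rng=rng))
    print(pruning_inspired_partition(W, ratio=0.5))
```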
1702.03118
35
Game | SiLU-dSiLU Final | SiLU-dSiLU Best | DQN | Gorila | double DQN
Alien | 1,370 | 2,246 | 3,069 | 2,621 | 2,907
Amidar | 762 | 904 | 740 | 1,190 | 702
Assault | 2,415 | 2,944 | 3,359 | 1,450 | 5,023
Asterix | 70,942 | 100,322 | 6,012 | 6,433 | 15,150
Asteroids | 6,537 | 10,614 | 1,629 | 1,048 | 931
Atlantis | 128,983 | 127,651 | 85,950 | 100,069 | 64,758
Bank Heist | 5 | 770 | 430 | 609 | 728
Battle Zone | 22,930 | 29,115 | 26,300 | 25,267 | 25,730
Beam Rider | 1,829 | 2,176 | 6,846 | 3,303 | 7,654
Bowling | 67 | 75 | 42 | 54 | 71
Boxing | 36 | 92 | 72 | 95 | 82
Breakout | 25 | 55 | 401 | 402 | 375
Mean (DQN Normalized) | 218 % | 332 % | 100 % | 102 % | 127 %
Median (DQN Normalized) | 78 % | 125 % | 100 % | 104 % | 105 %
1702.03118#35
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
36
AlexNet: AlexNet has 5 convolutional layers and 3 fully-connected layers. We set the accumulated portions of quantized weights at iterative steps as {0.3, 0.6, 0.8, 1}, the batch size as 256, the weight decay as 0.0005, and the momentum as 0.9. VGG-16: Compared with AlexNet, VGG-16 has 13 convolutional layers and more parameters. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 32, the weight decay as 0.0005, and the momentum as 0.9.
1702.03044#36
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
36
The results clearly show that our SiLU-dSiLU agent outperformed the other agents, improving the mean (median) DQN normalized best mean score from 127 % (105 %) achieved by double DQN to 332 % (125 %). The SiLU-dSiLU agents achieved the highest best mean score in 6 out of the 12 games and only performed much worse than the other 3 agents in one game, Breakout, where the learning never took off during the 200,000 episodes of training (see Figure 5). The performance was especially impressive in the Asterix (score of 100,322) and Asteroids (score of 10,614) games, which improved the best mean performance achieved by the second-best agent by 562 % and 552 %, respectively. # 4 Analysis # 4.1 Value estimation First, we investigate the ability of TD(λ) and Sarsa(λ) to accurately estimate discounted returns: $R_t = \sum_{k=0}^{T-t} \gamma^k r_{t+k}$.
1702.03118#36
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
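A small sketch of the discounted return defined above, computed for every time step of an episode by accumulating the rewards backwards from the terminal step:

```python
def discounted_returns(rewards, gamma):
    """R_t = sum_{k=0}^{T-t} gamma^k * r_{t+k}, computed by a backward pass."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

if __name__ == "__main__":
    print(discounted_returns([1.0, 0.0, 0.0, 4.0], gamma=0.5))
    # [1.5, 1.0, 2.0, 4.0]
```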
1702.03044
37
² https://github.com/BVLC/caffe/wiki/Model-Zoo
³ https://github.com/facebook/fb.resnet.torch/tree/master/pretrained
⁴ https://github.com/zhanghang1989/fb-caffe-exts
GoogleNet: Compared with AlexNet and VGG-16, GoogleNet is more difficult to quantize due to a smaller number of parameters and the increased network width. We set the accumulated portions of quantized weights at iterative steps as {0.2, 0.4, 0.6, 0.8, 1}, the batch size as 80, the weight decay as 0.0002, and the momentum as 0.9.
1702.03044#37
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method targeting the efficient conversion of any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods, which struggle with noticeable accuracy loss, our INQ has the potential to resolve this issue, benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible for forming a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible for compensating for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets demonstrate the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy compared with the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy compared with the 32-bit floating-point baseline. Besides, impressive results from the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
37
$R_t = \sum_{k=0}^{T-t} \gamma^k r_{t+k}$. Here T is the length of an episode. The reason for doing this is that van Hasselt et al. (2015) showed that the double DQN algorithm improved the performance of DQN in Atari 2600 games by reducing the overestimation of the action values. It is known (Thrun and Schwartz, 1993; van Hasselt, 2010) that Q-learning based algorithms, such as DQN, can overestimate action values due to the max operator, which is used in the computation of the learning targets. TD(λ) and Sarsa(λ) do not use the max operator to compute the learning targets and they should therefore not suffer from this problem. [Figure 6, described below: left panel, learned V(s_t)-values and R_t-values plotted against time step t; right panel, the normalized sum of differences plotted against episode length T, with a linear fit.]
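As a minimal illustration of the return defined above (our own sketch with assumed variable names, not the authors' code), the following Python snippet computes $R_t$ for every time step of a recorded episode and contrasts a Sarsa-style one-step target, which uses the action actually selected next, with a Q-learning target, which applies the max operator:

```python
import numpy as np

def discounted_returns(rewards, gamma):
    """R_t = sum_k gamma^k * r_{t+k}, computed backwards over one episode."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def sarsa_target(r, gamma, q_next, a_next):
    # On-policy target: value of the action that will actually be taken.
    return r + gamma * q_next[a_next]

def q_learning_target(r, gamma, q_next):
    # Off-policy target: the max operator is the source of overestimation bias.
    return r + gamma * np.max(q_next)

print(discounted_returns([0.0, 0.0, 1.0, 0.0, 2.0], gamma=0.99))
```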
1702.03118#37
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
38
ResNet-18: Different from the above three networks, ResNets have batch normalization layers and relieve the vanishing gradient problem by using shortcut connections. We first test the 18-layer version for exploratory purposes and test the 50-layer version later on. The network architectures of ResNet-18 and ResNet-34 are very similar. The only difference is the number of filters in every convolutional layer. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 80, the weight decay as 0.0005, and the momentum as 0.9. ResNet-50: Besides significantly increased network depth, ResNet-50 has a more complex network architecture in comparison to ResNet-18. However, regarding network architecture, ResNet-50 is very similar to ResNet-101 and ResNet-152. The only difference is the number of filters in every convolutional layer. We set the accumulated portions of quantized weights at iterative steps as {0.5, 0.75, 0.875, 1}, the batch size as 32, the weight decay as 0.0005, and the momentum as 0.9. 3.2 ANALYSIS OF WEIGHT PARTITION STRATEGIES
1702.03044#38
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
38
Figure 6: The left panel shows learned V(s_t)-values and R_t-values, for examples of short, medium-long, and long episodes in SZ-Tetris. The right panel shows the normalized sum of differences between V(s_t) and R_t for 1,000 episodes and the best linear fit of the data (−0.012T + 9.8). Figure 6 shows that for episodes of average (or expected) length the best dSiLU network agent in SZ-Tetris learned good estimates of the discounted returns, both along the episodes (left panel) and as measured by the normalized sum of differences between V(s_t) and R_t (right panel): $\frac{1}{T}\sum_{t=1}^{T}\left(V(s_t) - R_t\right)$. The linear fit of the normalized sum of differences data for 1,000 episodes gives a small underestimation (−0.43) for an episode of average length (866 time steps). The V(s_t)-values overestimated the discounted returns for short episodes and underestimated the discounted returns for long episodes (especially in the middle part of the episodes), which is accurate since the episodes ended earlier and later, respectively, than were expected.
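The right-panel statistic and its linear fit could be computed as in the following sketch (toy data and variable names are ours; numpy's polyfit stands in for whatever fitting routine the authors used):

```python
import numpy as np

def normalized_diff(values, returns):
    """(1/T) * sum_t (V(s_t) - R_t) for a single episode."""
    return float(np.mean(np.asarray(values) - np.asarray(returns)))

# Per-episode statistics (toy data) and a linear fit d = a*T + b,
# analogous to the reported fit -0.012*T + 9.8.
episode_lengths = np.array([400.0, 866.0, 1300.0])
episode_diffs = np.array([5.0, -0.4, -5.8])
a, b = np.polyfit(episode_lengths, episode_diffs, deg=1)
print("fit: %.4f*T + %.2f, value at T = 866: %.2f" % (a, b, a * 866 + b))
```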
1702.03118#38
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
39
In our INQ, the first operation is weight partition, whose result will directly affect the following group-wise quantization and re-training operations. Therefore, the second set of experiments is conducted to analyze two candidate strategies for weight partition. As mentioned in the previous section, we use a pruning-inspired strategy for weight partition. Unlike the random strategy, in which all the weights have equal probability of falling into the two disjoint groups, the pruning-inspired strategy considers the weights with larger absolute values to be more important than the smaller ones for forming a low-precision base for the original CNN model. We use ResNet-18 as a test case to compare the performance of these two strategies. In the experiments, the parameter settings are completely the same as described in Section 3.1. We set 4 epochs for weight re-training. Table 2 summarizes the results of our INQ with 5-bit quantization. It can be seen that our INQ achieves a top-1 error rate of 32.11% and a top-5 error rate of 11.73% by using random partition. Comparatively, pruning-inspired partition brings 1.09% and 0.83% decreases in top-1 and top-5 error rates, respectively. Apparently, pruning-inspired
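The two partition strategies compared above can be sketched as follows (a simplified numpy illustration under our own assumptions about mask semantics; it is not the released code):

```python
import numpy as np

def pruning_inspired_partition(weights, portion):
    """Mark the `portion` fraction of weights with the largest absolute
    values for immediate quantization; the rest will be re-trained."""
    flat = np.sort(np.abs(weights).ravel())
    k = int(round(portion * flat.size))
    threshold = flat[flat.size - k] if k > 0 else np.inf
    return np.abs(weights) >= threshold  # True -> quantize now, False -> re-train

def random_partition(weights, portion, seed=0):
    """Every weight has the same probability of being quantized first."""
    rng = np.random.default_rng(seed)
    return rng.random(weights.shape) < portion

W = np.random.default_rng(1).normal(size=(4, 4))
print(pruning_inspired_partition(W, 0.5).sum(), "weights quantized first")
```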
1702.03044#39
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
41
Figure 7 shows typical examples of learned action values and discounted returns along episodes where the best SiLU-dSiLU agents in Asterix (score of 108,500) and Asteroids (score of 22,500) successfully played for the full 18,000 frames (i.e., 4,500 time steps, since the agents acted every fourth frame). In both games, with the exception of a few smaller parts, the learned action values matched the discounted returns very well along the whole episodes. The normalized sums of differences (absolute differences) were 0.59 (1.05) in the Asterix episode and −0.23 (1.28) in the Asteroids episode. In both games, the agents overestimated action values at the end of the episodes. However, this is an artifact of the episodes ending after a maximum of 4,500 time steps, which the agents could not predict. Videos of the corresponding learned behaviors in Asterix and Asteroids can be found at http://www.cns.atr.jp/~elfwing/videos/asterix_deep_SiL.mov and http://www.cns.atr.jp/~elfwing/videos/asteroids_deep_SiL.mov. # 4.2 Action selection
1702.03118#41
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
42
The third set of experiments is performed to explore the limit of the expected bit-width under which our INQ can still achieve lossless network quantization. Similar to the second set of experiments, we also use ResNet-18 as a test case, and the parameter settings for the batch size, the weight decay and the momentum are completely the same. Finally, lower-precision models with 4-bit, 3-bit and even 2-bit ternary weights are generated for comparisons. As the expected bit-width goes down, the number of candidate quantum values decreases significantly, so we increase the number of iterative steps accordingly to enhance the accuracy of the final low-precision model. Specifically, we set the accumulated portions of quantized weights at iterative steps as {0.3, 0.5, 0.8, 0.9, 0.95, 1}, {0.2, 0.4, 0.6, 0.7, 0.8, 0.9, 0.95, 1} and {0.2, 0.4, 0.6, 0.7, 0.8, 0.85, 0.9, 0.95, 0.975, 1} for 4-bit, 3-bit
1702.03044#42
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
43
0.6, 0.7, 0.8, 0.85, 0.9, 0.95, 0.975, 1} for 4-bit, 3-bit and 2-bit ternary models, respectively. The required number of epochs also increases when the expected bit-width goes down, and it reaches 30 when training our 2-bit ternary model. Although our 4-bit model shows slightly decreased accuracy when compared with the 5-bit model, its accuracy is still better than that of the pre-trained full-precision model. Comparatively, even when the expected bit-width goes down to 3, our low-precision model shows only 0.19% and
1702.03044#43
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
43
Table 3: Mean scores and average numbers of exploratory actions for softmax action selection and ε-greedy action selection with ε set to 0, 0.001, 0.01, and 0.05.

Game        Selection     Mean score   Exploratory actions
SZ-Tetris   τ = 0.0098    326          28.7
            ε = 0         332          0
            ε = 0.001     260          0.59
            ε = 0.01      71           2.0
            ε = 0.05      14           3.2
Asterix     τ = 0.00495   104,299      47.6
            ε = 0         102,890      0
            ε = 0.001     98,264       3.6
            ε = 0.01      66,113       30.0
            ε = 0.05      7,152        56.8
Asteroids   τ = 0.00495   15,833       31.3
            ε = 0         15,091       0
            ε = 0.001     11,105       2.1
            ε = 0.01      3,536        11.7
            ε = 0.05      1,521        47.3
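The two action-selection rules compared in Table 3 can be written down in a few lines (a minimal numpy sketch; the function names and the toy Q-values are ours):

```python
import numpy as np

def softmax_action(q_values, tau, rng):
    """Softmax selection: P(a) is proportional to exp(Q(a)/tau)."""
    z = np.asarray(q_values, dtype=float) / tau
    z -= z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))

def epsilon_greedy_action(q_values, epsilon, rng):
    """With probability epsilon take a uniformly random action, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q = [1.2, 0.9, -3.0]   # the third action is catastrophic, as in SZ-Tetris
print(softmax_action(q, tau=0.00495, rng=rng), epsilon_greedy_action(q, 0.05, rng))
```

With a low temperature, softmax essentially never takes the catastrophic third action, whereas ε-greedy exploration picks it with probability ε/3 regardless of how bad its estimated value is.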
1702.03118#43
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
44
0.33% losses in top-1 and top-5 recognition rates, respectively. As for our 2-bit ternary model, although it incurs a 2.25% increase in top-1 error rate and a 1.56% increase in top-5 error rate in comparison to the pre-trained full-precision reference, its accuracy is considerably better than state-of-the-art results reported for binary-weight network (BWN) (Rastegari et al., 2016) and ternary weight network (TWN) (Li & Liu, 2016). Detailed results are summarized in Table 3 and Table 4.

Table 3: Our INQ generates extremely low-precision (4-bit and 3-bit) models with improved or very similar accuracy compared with the full-precision ResNet-18 model.

Model           Bit-width     Top-1 error   Top-5 error
ResNet-18 ref   32            31.73%        11.31%
INQ             5             31.02%        10.90%
INQ             4             31.11%        10.99%
INQ             3             31.92%        11.64%
INQ             2 (ternary)   33.98%        12.87%

Table 4: Comparison of our 2-bit ternary model and some other binary or ternary models, including the BWN and the TWN approximations of ResNet-18.
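The figures quoted above follow directly from Table 3; as a quick check:

```latex
\begin{align*}
\text{3-bit (top-1, top-5):}\quad & 31.92\% - 31.73\% = 0.19\%, \qquad 11.64\% - 11.31\% = 0.33\% \\
\text{2-bit (top-1, top-5):}\quad & 33.98\% - 31.73\% = 2.25\%, \qquad 12.87\% - 11.31\% = 1.56\%
\end{align*}
```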
1702.03044#44
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
44
rithms that have been used in the Atari 2600 domain have used ε-greedy action selection (one exception is the asynchronous advantage actor-critic method, A3C, which used softmax output units for the actor (Mnih et al., 2016)). One drawback of ε-greedy selection is that it selects all actions with equal probability when exploring, which can lead to poor learning outcomes in tasks where the worst actions have very bad consequences. This is clearly the case in both Tetris games and in the Asterix and Asteroids games. In each state in Tetris, many, and often most, actions will create holes, which are difficult (especially in SZ-Tetris) to remove. In the Asterix game, random exploratory actions can kill Asterix if executed when Cacofonix’s deadly lyres are passing. In the Asteroids game, one of the actions sends the spaceship into hyperspace and makes it reappear in a random location, which has the risk of the spaceship self-destructing or of destroying it by appearing on top of an asteroid. We compared softmax action selection (τ set to the final values) and ε-greedy action selection with ε set to 0,
1702.03118#44
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03118
45
on top of an asteroid. We compared softmax action selection (τ set to the final values) and ε-greedy action selection with ε set to 0, 0.001, 0.01, and 0.05 for the best dSiLU network agent in SZ-Tetris and the best SiLU-dSiLU agents in the Asterix and Asteroids games. The results (see Table 3) clearly show that ε-greedy action selection with ε set to 0.05, as used for evaluation by DQN, is not suitable for these games. The scores were only 4 % to 10 % of the scores for softmax selection. The negative effects of random exploration were largest in Asteroids and SZ-Tetris. Even when ε was set as low as 0.001 and the agent performed only 2.1 exploratory actions per episode in Asteroids and 0.59 in SZ-Tetris, the mean scores were reduced by 30 %
1702.03118#45
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
46
In the literature, the recently proposed deep compression method (Han et al., 2016) reports the best results so far on network compression without loss of model accuracy. Therefore, the last set of experiments is conducted to explore the potential of our INQ for much better deep compression. Note that Han et al. (2016) is a hybrid network compression solution combining three different techniques, namely network pruning (Han et al., 2015), vector quantization (Gong et al., 2014) and Huffman coding. Taking AlexNet as an example, network pruning gets 9× compression; however, this result is mainly obtained from the fully connected layers. Actually, its compression performance on the convolutional layers is less than 3× (as can be seen in Table 4 of Han et al. (2016)). Besides, network pruning is realized by separately performing pruning and re-training in an iterative way, which is very time-consuming. It takes at least several weeks to compress AlexNet. We solved this problem with our dynamic network surgery (DNS) method (Guo et al., 2016), which achieves about 7× speed-up in training and improves the performance of network pruning from 9× to 17.7×. In Han et al.
1702.03044#46
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
46
and 20 % (26 % and 22 %), respectively, compared with softmax selection (ε = 0). # 5 Conclusions In this study, we proposed SiLU and dSiLU as activation functions for neural network function approximation in reinforcement learning. We demonstrated in stochastic SZ-Tetris that SiLUs significantly outperformed ReLUs, and that dSiLUs significantly outperformed sigmoid units. The best agent, the dSiLU network agent, achieved new state-of-the-art results in both stochastic SZ-Tetris and 10×10 Tetris. In the Atari 2600 domain, a deep Sarsa(λ) agent with SiLUs in the convolutional layers and dSiLUs in the fully-connected hidden layer outperformed DQN and double DQN, as measured by mean and median DQN normalized scores.
1702.03118#46
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
47
et al., 2016) which achieves about 7× speed-up in training and improves the performance of network pruning from 9× to 17.7×. In Han et al. (2016), after network pruning, vector quantization further improves the compression ratio from 9× to 27×, and Huffman coding finally boosts the compression ratio up to 35×. For fair comparison, we combine our proposed INQ and DNS, and compare the resulting method with Han et al. (2016). Detailed results are summarized in Table 5. When combining our proposed INQ and DNS, we achieve much better compression results compared with Han et al. (2016). Specifically, with 5-bit quantization, we can achieve 53× compression with slightly larger gains in both top-5 and top-1 recognition rates, yielding 51.43%/96.30% improvement in compression performance compared with the full version/fair version (i.e., the combination of network pruning and vector quantization) of Han et al. (2016), respectively. Consistently better results have also been obtained for our 4-bit and 3-bit models.
1702.03044#47
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
47
An additional purpose of this study was to demonstrate that a more traditional approach of using on-policy learning with eligibility traces and softmax selection (i.e., basically a “textbook” version of a reinforcement learning agent, but with non-linear neural network function approximators) can be competitive with the approach used by DQN. This means that there is a lot of room for improvements, e.g., by using a separate target network as in DQN, but also by using more recent advances such as the dueling architecture (Wang et al., 2016) for more accurate estimates of the action values and asynchronous learning by multiple agents in parallel (Mnih et al., 2016). # Acknowledgments This work was supported by the project commissioned by the New Energy and Industrial Technology Development Organization (NEDO), JSPS KAKENHI grant 16K12504, and Okinawa Institute of Science and Technology Graduate University research support to KD. # References Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279.
1702.03118#47
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
48
Besides, we also perform a set of experiments on AlexNet to compare the performance of our INQ and vector quantization (Gong et al., 2014). For fair comparison, re-training is also used to enhance the performance of vector quantization, and we set the number of cluster centers for all of the 5 convolutional layers and 3 fully connected layers to 32 (i.e., 5-bit quantization). In the experiment, vector quantization incurs over 3% loss in model accuracy. When we change the number of cluster centers for convolutional layers from 32 to 128, it gets an accuracy loss of 0.98%. This is consistent with the results reported in (Gong et al., 2014).

Table 5: Comparison of the combination of our INQ and DNS, and deep compression method on AlexNet. Conv: Convolutional layer, FC: Fully connected layer, P: Pruning, Q: Quantization, H: Huffman coding.

Comparatively, vector quantization is mainly proposed
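For reference, the vector-quantization baseline of (Gong et al., 2014) amounts to running k-means over the weight values and storing a small index per weight; a toy sketch with 32 cluster centers (our illustration, using scipy rather than the original implementation):

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

# Toy stand-in for one layer's weights.
weights = np.random.default_rng(0).normal(scale=0.05, size=4096).reshape(-1, 1)

codebook, _ = kmeans(weights, 32)      # 32 centers, i.e. a 5-bit index per weight
indices, _ = vq(weights, codebook)     # each weight is stored as a codebook index
reconstructed = codebook[indices]
print(codebook.shape, float(np.mean((weights - reconstructed) ** 2)))
```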
1702.03044#48
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
48
Bertsekas, D. P. and Ioffe, S. (1996). Temporal differences based policy iteration and applications in neuro-dynamic programming. Technical Report LIDS-P-2349, MIT. Burgiel, H. (1997). How to lose at Tetris. Mathematical Gazette, 81:194–200. Elfwing, S., Uchibe, E., and Doya, K. (2015). Expected energy-based restricted Boltzmann machine for classification. Neural Networks, 64(3):29–38. Elfwing, S., Uchibe, E., and Doya, K. (2016). From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning. Neural Networks, 84:17–27. Fahey, C. (2003). Tetris AI, computer plays tetris. colinfahey.com/tetris/tetris.html [Online; accessed 22-February-2017]. Faußer, S. and Schwenker, F. (2013). Neural network ensembles in reinforcement learning. Neural Processing Letters, pages 1–15.
1702.03118#48
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03118
49
Faußer, S. and Schwenker, F. (2013). Neural network ensembles in reinforcement learning. Neural Processing Letters, pages 1–15. Freund, Y. and Haussler, D. (1992). Unsupervised learning of distributions on binary vectors using two layer networks. In Proceedings of Advances in Neural Information Processing Systems (NIPS1992). Gabillon, V., Ghavamzadeh, M., and Scherrer, B. (2013). Approximate dynamic programming finally performs well in the game of Tetris. In Proceedings of Advances in Neural Information Processing Systems (NIPS2013), pages 1754–1762. Hahnloser, R. H. R., Sarpeshkar, R., Mahowald, M. A., Douglas, R. J., and Seung, H. S. (2000). Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405:947–951. Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800.
1702.03118#49
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
50
to compress the parameters in the fully connected layers of a pre-trained full-precision CNN model, while our INQ addresses all network layers simultaneously and has no accuracy loss for 5-bit and 4-bit quantization. Therefore, it is evident that our INQ is much better than vector quantization. Last but not least, the final weights for vector quantization (Gong et al., 2014), network pruning (Han et al., 2015) and deep compression (Han et al., 2016) are still floating-point values, but the final weights for our INQ are in the form of either powers of two or zero. The direct advantage of our INQ is that the original floating-point multiplication operations can be replaced by cheaper binary bit shift operations on dedicated hardware like FPGA (see the sketch below). # 4 CONCLUSIONS
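The bit-shift remark above can be made concrete with a toy integer example (our illustration, not the paper's hardware design): a weight constrained to be plus or minus a power of two turns each multiplication into a shift.

```python
def multiply_by_pow2_weight(x, sign, exponent):
    """Compute x * (sign * 2**exponent) for an integer activation x using a shift.
    Negative exponents correspond to right shifts in a fixed-point representation."""
    shifted = x << exponent if exponent >= 0 else x >> (-exponent)
    return sign * shifted

# 13 * (+2**3) = 104 and 13 * (-2**2) = -52, with no multiplier involved.
print(multiply_by_pow2_weight(13, +1, 3), multiply_by_pow2_weight(13, -1, 2))
```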
1702.03044#50
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
50
Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800. Jaskowski, W., Szubert, M. G., Liskowski, P., and Krawiec, K. (2015). High-dimensional function approximation for knowledge-free reinforcement learning: a case study in SZ-Tetris. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO2015), pages 567–573. Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., and Kavukcuoglu, K. (2016). Asynchronous methods for deep reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML2016), pages 1928–1937.
1702.03118#50
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
51
In this paper, we present INQ, a new network quantization method, to address the problem of how to convert any pre-trained full-precision (i.e., 32-bit floating-point) CNN model into a lossless low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which usually quantize all the network weights simultaneously, INQ is a more compact quantization framework. It incorporates three interdependent operations: weight partition, group-wise quantization and re-training. Weight partition splits the weights in each layer of a pre-trained full-precision CNN model into two disjoint groups which play complementary roles in INQ. The weights in the first group are directly quantized by a variable-length encoding method, forming a low-precision base for the original CNN model. The weights in the other group are re-trained while keeping all the quantized weights fixed, compensating for the accuracy loss from network quantization. More importantly, the operations of weight partition, group-wise quantization and re-training are repeated on the latest re-trained weight group in an iterative manner until all the weights are quantized, acting as an incremental network quantization and accuracy enhancement
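Read as a procedure, the paragraph above describes a loop that repeatedly freezes and quantizes part of each layer and re-trains the rest. Below is a minimal NumPy sketch of that loop for a single weight array; the magnitude-based partition, the accumulated portions, the power-of-two grid bounds (`n1`, `n2`) and the `retrain` callback are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def quantize_pow2(w, n1=-1, n2=-8):
    """Map a weight to zero or to a power of two in [2**n2, 2**n1],
    rounding the exponent in log space and keeping the sign."""
    if abs(w) < 2.0 ** (n2 - 1):          # too small for the grid: collapse to zero
        return 0.0
    e = np.clip(np.round(np.log2(abs(w))), n2, n1)
    return float(np.sign(w) * 2.0 ** e)

def inq_layer(weights, portions=(0.5, 0.75, 0.875, 1.0), retrain=None):
    """Incrementally quantize a 1-D weight array. `portions` are the
    accumulated fractions of weights frozen after each step; `retrain`
    is a placeholder for fine-tuning that updates only w[~frozen]."""
    w = weights.astype(np.float64).copy()
    frozen = np.zeros(w.size, dtype=bool)
    for p in portions:
        target = int(round(p * w.size))                 # total weights frozen after this step
        free = np.flatnonzero(~frozen)
        # Freeze the largest-magnitude weights among those still free (an assumption).
        take = free[np.argsort(-np.abs(w[free]))][: max(target - int(frozen.sum()), 0)]
        w[take] = [quantize_pow2(x) for x in w[take]]
        frozen[take] = True
        if retrain is not None and not frozen.all():
            w = retrain(w, frozen)                      # re-train the remaining free weights
    return w
```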
1702.03044#51
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
51
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529–533. Nair, A., Srinivasan, P., Blackwell, S., Alcicek, C., Fearon, R., De Maria, A., Panneershelvam, V., Suleyman, M., Beattie, C., Petersen, S., Legg, S., Mnih, V., Kavukcuoglu, K., and Silver, D. (2015). Massively parallel methods for deep reinforcement learning. CoRR, abs/1507.04296. Rummery, G. A. and Niranjan, M. (1994). On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Cambridge University Engineering Department.
1702.03118#51
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
52
are repeated on the latest re-trained weight group in an iterative manner until all the weights are quantized, acting as an incremental network quantization and accuracy enhancement procedure. On the ImageNet large scale classification task, we conduct extensive experiments and show that our quantized CNN models with 5-bit, 4-bit, 3-bit and even 2-bit ternary weights have improved or at least comparable accuracy against their full-precision baselines, including AlexNet, VGG-16, GoogleNet and ResNets. As for future work, we plan to extend the incremental idea behind INQ from low-precision weights to low-precision activations and low-precision gradients (we have already made some good progress on this, as shown in our supplementary materials). We will also investigate computation and power efficiency by implementing our low-precision CNN models on hardware platforms.
1702.03044#52
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
52
Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2016). Prioritized experience replay. In International Conference on Learning Representations (ICLR2016). Scherrer, B., Ghavamzadeh, M., Gabillon, V., Lesner, B., and Geist, M. (2015). Approximate modified policy iteration and its application to the game of Tetris. Journal of Machine Learning Research, 16:1629–1676. Smolensky, P. (1986). Information processing in dynamical systems: Foundations of harmony theory. In Rumelhart, D. E. and McClelland, J. L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. MIT Press. Sutton, R. S. (1988). Learning to predict by the method of temporal differences. Machine Learning, 3:9–44. Sutton, R. S. (1996). Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Proceedings of Advances in Neural Information Processing Systems (NIPS1996), pages 1038–1044. Sutton, R. S. and Barto, A. (1998). Reinforcement Learning: An Introduction. MIT Press.
1702.03118#52
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
53
# REFERENCES Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015a. Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In ICML, 2015b. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, 2015. Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830v3, 2016. Ross Girshick. Fast R-CNN. In ICCV, 2015. Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115v1, 2014.
1702.03044#53
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03118
53
Sutton, R. S. and Barto, A. (1998). Reinforcement Learning: An Introduction. MIT Press. Szita, I. and Szepesvári, C. (2010). SZ-Tetris as a benchmark for studying key problems of reinforcement learning. In ICML 2010 workshop on machine learning and games. Tesauro, G. (1994). TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215–219. Thiery, C. and Scherrer, B. (2009). Improvements on learning Tetris with cross entropy. International Computer Games Association Journal, 32. Thrun, S. and Schwartz, A. (1993). Issues in using function approximation for reinforcement learning. In Proceedings of the 1993 Connectionist Models Summer School, pages 255–263. van Hasselt, H. (2010). Double Q-learning. In Proceedings of Advances in Neural Information Processing Systems (NIPS2010), pages 2613–2621. van Hasselt, H., Guez, A., and Silver, D. (2015). Deep reinforcement learning with double Q-learning. CoRR, abs/1509.06461.
1702.03118#53
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection with simple annealing can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10$\times$10 board, using TD($\lambda$) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa($\lambda$) agent with SiLU and dSiLU hidden units.
http://arxiv.org/pdf/1702.03118
Stefan Elfwing, Eiji Uchibe, Kenji Doya
cs.LG
18 pages, 22 figures; added deep RL results for SZ-Tetris
null
cs.LG
20170210
20171102
[]
1702.03044
54
Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. In NIPS, 2016. Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML, 2015. Song Han, Jeff Pool, John Tran, and William J. Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015. Song Han, Jeff Pool, John Tran, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061v1, 2016.
1702.03044#54
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
55
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 1998. Fengfu Li and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711v1, 2016. Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279v4, 2016. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
1702.03044#55
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
56
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In NIPS, 2014. Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation from predicting 10,000 classes. In CVPR, 2014. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261v1, 2016. Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. DeepFace: Closing the gap to human-level performance in face verification. In CVPR, 2014.
1702.03044#56
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
57
Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on CPUs. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011. Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1605.04711v1, 2016. # A APPENDIX 1: STATISTICAL ANALYSIS OF THE QUANTIZED WEIGHTS
1702.03044#57
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
58
Taking our 5-bit AlexNet model as an example, we analyze the distribution of the quantized weights. Detailed statistical results are summarized in Table 6. We can find: (1) in the 1st and 2nd convolutional layers, the values of {−2^−6, −2^−5, −2^−4, 2^−6, 2^−5, 2^−4} and {−2^−8, −2^−7, −2^−6, −2^−5, 0, 2^−8, 2^−7, 2^−6, 2^−5} occupy over 60% and 94% of all quantized weights, respectively; (2) the distributions of the quantized weights in the 3rd, 4th and 5th convolutional layers are similar to that of the 2nd convolutional layer, and more weights are quantized into zero in the 2nd, 3rd, 4th and 5th convolutional layers compared with the 1st convolutional layer; (3) in the 1st fully connected layer, the values of {−2^−10, −2^−9, −2^−8, −2^−7, 0, 2^−10, 2^−9, 2^−8, 2^−7} occupy about 98% of all quantized weights, and similar results can be seen for the
1702.03044#58
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
59
0, 2^−10, 2^−9, 2^−8, 2^−7} occupy about 98% of all quantized weights, and similar results can be seen for the 2nd fully connected layer; (4) generally, the distributions of the quantized weights in the convolutional layers are more scattered than in the fully connected layers, which may partly explain why it is much easier to get good compression performance on fully connected layers than on convolutional layers when using methods such as network hashing (Chen et al., 2015b) and vector quantization (Gong et al., 2014); (5) for the 5-bit AlexNet model, the required bit-width for each layer is actually 4 rather than 5.
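Reproducing this kind of per-layer statistic only requires counting how often each quantized value (zero or a signed power of two) occurs in a layer. A small illustrative helper, assuming `layer_weights` already holds INQ-quantized values:

```python
import numpy as np
from collections import Counter

def quantized_value_histogram(layer_weights):
    """Return {quantized value: percentage of the layer's weights}."""
    counts = Counter(np.asarray(layer_weights).ravel().tolist())
    total = sum(counts.values())
    return {value: 100.0 * n / total for value, n in sorted(counts.items())}
```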
1702.03044#59
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
61
[Table 6: per-layer distribution of the quantized weights in the 5-bit AlexNet model, reporting the percentage of weights that take each power-of-two value or zero.]
1702.03044#61
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.03044
63
B APPENDIX 2: LOSSLESS CNNS WITH LOW-PRECISION WEIGHTS AND LOW-PRECISION ACTIVATIONS Table 7: Comparison of our VGG-16 model with 5-bit weights and 4-bit activations against the pre-trained reference with 32-bit floating-point weights and 32-bit floating-point activations. VGG-16 ref (32/32 bit-width for weight/activation): 31.46% top-1 error, 11.35% top-5 error; VGG-16 (5/4): 29.82% top-1 error, 10.19% top-5 error; decrease in top-1/top-5 error: 1.64%/1.16%. Recently, we have made some good progress on developing our INQ for lossless CNNs with both low-precision weights and low-precision activations. According to the results summarized in Table 7, our VGG-16 model with 5-bit weights and 4-bit activations shows improved top-1 and top-5 recognition rates in comparison to the pre-trained reference with 32-bit floating-point weights and 32-bit floating-point activations. To the best of our knowledge, these should be the best results reported on the VGG-16 architecture so far.
1702.03044#63
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization.
http://arxiv.org/pdf/1702.03044
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
cs.CV, cs.AI, cs.NE
Published by ICLR 2017, and the code is available at https://github.com/Zhouaojun/Incremental-Network-Quantization
null
cs.CV
20170210
20170825
[ { "id": "1605.04711" }, { "id": "1602.07261" }, { "id": "1609.07061" }, { "id": "1602.02830" }, { "id": "1603.05279" } ]
1702.01806
1
# Abstract The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT then uses a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to-right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the drawback of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German→English and Chinese→English without losing any translation quality.
1702.01806#1
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
2
models (Jean et al., 2015; Luong et al., 2015), in the recent years it has become very popular (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014). With the recent success of NMT, attention has shifted towards making it more practical. One of the challenges is the search strategy for extracting the best translation for a given source sentence. In NMT, new sentences are translated by a simple beam search decoder that finds a translation that approximately maximizes the conditional probability of a trained NMT model. The beam search strategy generates the translation word by word from left-to-right while keeping a fixed number (beam) of active candidates at each time step. By increasing the beam size, the translation performance can increase at the expense of significantly reducing the decoder speed. Typically, there is a saturation point at which the translation quality does not improve any more by further increasing the beam. The motivation of this work is twofold. First, we prune the search graph and thus speed up the decoding process without losing any translation quality. Secondly, we observed that the best scoring candidates often share the same history and often come
1702.01806#2
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
4
# 1 Introduction Due to the fact that Neural Machine Translation (NMT) is reaching comparable or even better performance compared to the traditional statistical machine translation (SMT) # 2 Related Work The original beam search for sequence to sequence models has been introduced and described by (Graves, 2012; Boulanger-Lewandowski et al., 2013) and by (Sutskever et al., 2014) for neural machine translation. (Hu et al., 2015; Mi et al., 2016) improved the beam search with a constraint softmax function which only considered a limited word set of translation candidates to reduce the computation complexity. This has the advantage that they normalize only a small set of candidates and thus improve the decoding speed. (Wu et al., 2016) only consider tokens that have local scores that are not more than beamsize below the best token during their search. Further, the authors prune all partial hypotheses whose scores are beamsize lower than the best final hypothesis (if one has already been generated). In this work, we investigate different absolute and relative pruning schemes which have successfully been applied in statistical machine translation for e.g. phrase table pruning (Zens et al., 2012). # 3 Original Beam Search
1702.01806#4
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
5
# 3 Original Beam Search The original beam-search strategy finds a translation that approximately maximizes the conditional probability given by a specific model. It builds the translation from left-to-right and keeps a fixed number (beam) of translation candidates with the highest log-probability at each time step. For each end-of-sequence symbol that is selected among the highest scoring candidates the beam is reduced by one and the translation is stored into a final candidate list. When the beam is zero, it stops the search and picks the translation with the highest log-probability (normalized by the number of target words) out of the final candidate list. # 4 Search Strategies In this section, we describe the different strategies we experimented with. In all our extensions, we first reduce the candidate list to the current beam size and apply on top of this one or several of the following pruning schemes. Relative Threshold Pruning. The relative threshold pruning method discards those candidates that are far worse than the best active candidate. Given a pruning threshold rp and an active candidate list C, a candidate cand ∈ C is discarded if: $\mathrm{score}(cand) \leq rp \cdot \max_{c \in C}\{\mathrm{score}(c)\}$ (1)
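A compact Python sketch of this baseline decoder is given below. Here `step_fn(prefix)` is an assumed stand-in for the trained NMT model, returning (token, log-probability) pairs for the next position, and the final pick is length-normalized as described above.

```python
def beam_search(step_fn, beam_size, eos_id, max_len=100):
    """Plain left-to-right beam search: keep the `beam_size` best partial
    hypotheses, move hypotheses ending in EOS to a final list (shrinking
    the beam), and return the best length-normalized finished one."""
    active = [([], 0.0)]          # (prefix, cumulative log-probability)
    finished = []
    budget = beam_size            # shrinks by one per finished hypothesis
    for _ in range(max_len):
        if budget == 0 or not active:
            break
        expansions = []
        for prefix, score in active:
            for token, log_prob in step_fn(prefix):
                expansions.append((prefix + [token], score + log_prob))
        expansions.sort(key=lambda c: c[1], reverse=True)
        active = []
        for prefix, score in expansions[:budget]:
            if prefix[-1] == eos_id:
                finished.append((prefix, score / len(prefix)))  # length-normalize
                budget -= 1
            else:
                active.append((prefix, score))
    pool = finished or [(p, s / max(len(p), 1)) for p, s in active]
    return max(pool, key=lambda c: c[1]) if pool else None
```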
1702.01806#5
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
6
$\mathrm{score}(cand) \leq rp \cdot \max_{c \in C}\{\mathrm{score}(c)\}$ (1) Absolute Threshold Pruning. Instead of taking the relative difference of the scores into account, we just discard those candidates that are worse by a specific threshold than the best active candidate. Given a pruning threshold ap and an active candidate list C, a candidate cand ∈ C is discarded if: $\mathrm{score}(cand) \leq \max_{c \in C}\{\mathrm{score}(c)\} - ap$ (2) Relative Local Threshold Pruning. In this pruning approach, we only consider the score scorew of the last generated word and not the total score, which also includes the scores of the previously generated words. Given a pruning threshold rpl and an active candidate list C, a candidate cand ∈ C is discarded if: $\mathrm{score}_w(cand) \leq rpl \cdot \max_{c \in C}\{\mathrm{score}_w(c)\}$ (3) Maximum Candidates per Node. We observed that at each time step during the decoding process, most of the partial hypotheses share the same predecessor words. To introduce more diversity, we allow only a fixed number of candidates with the same history at each time step. Given a maximum candidate threshold mc and an active candidate list C, a candidate cand ∈ C is discarded if already mc better scoring partial hypotheses with the same history are in the candidate list. # 5 Experiments
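All four schemes can be viewed as filters applied to the candidate list after it has been reduced to the current beam size. The sketch below applies the inequalities of Eqs. (1)-(3) and the per-history cap literally; the dictionary layout of a candidate and the assumption that the scores are on the scale the equations intend are illustrative choices, not the authors' implementation.

```python
from collections import defaultdict

def prune_candidates(cands, rp=None, ap=None, rpl=None, mc=None):
    """`cands`: best-to-worst sorted list of dicts with keys 'score'
    (total score), 'word_score' (score of the last generated word) and
    'history' (tuple of previously generated words)."""
    best = cands[0]["score"]
    best_word = max(c["word_score"] for c in cands)
    kept, per_history = [], defaultdict(int)
    for c in cands:
        if rp is not None and c["score"] <= rp * best:               # Eq. (1)
            continue
        if ap is not None and c["score"] <= best - ap:               # Eq. (2)
            continue
        if rpl is not None and c["word_score"] <= rpl * best_word:   # Eq. (3)
            continue
        if mc is not None and per_history[c["history"]] >= mc:       # per-node cap
            continue
        per_history[c["history"]] += 1
        kept.append(c)
    return kept
```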
1702.01806#6
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
8
In all our experiments, we use our in-house attention-based NMT implementation, which is similar to the model of Bahdanau et al. (2014). For German→English, we use sub-word units extracted by byte pair encoding (Sennrich et al., 2015) instead of words, which shrinks the vocabulary to 40k sub-word symbols for both source and target. For Chinese→English, we limit our vocabularies to the top 300K most frequent words for both source and target language. Words not in these vocabularies are converted into an unknown token. During translation, we use the alignments (from the attention mechanism) to replace the unknown tokens either with potential targets (obtained from an IBM Model-1 trained on the parallel data) or with the source word itself (if no target was found) (Mi et al., 2016). We use an embedding dimension of 620 and fix the RNN GRU layers to be of 1000 cells each.
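A minimal sketch of the unknown-token replacement step described above; it is not the paper's code, and the function name, the NumPy representation of the attention matrix, and the dictionary form of the IBM Model-1 lexicon are assumptions made for illustration.

```python
# Hypothetical sketch of unknown-token replacement via attention alignments:
# each <unk> in the output is replaced by the lexicon translation of the source
# word it attended to most, or by the source word itself if no entry exists.
import numpy as np

def replace_unknowns(target_tokens, source_tokens, attention, lexicon, unk="<unk>"):
    """attention: array of shape (target_len, source_len) with attention weights;
    lexicon: dict mapping a source word to its most likely target word (Model-1 style)."""
    output = []
    for t, token in enumerate(target_tokens):
        if token != unk:
            output.append(token)
            continue
        src_pos = int(np.argmax(attention[t]))          # source word the decoder attended to
        src_word = source_tokens[src_pos]
        output.append(lexicon.get(src_word, src_word))  # translate it, or copy it as-is
    return output
```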
1702.01806#8
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
9
Figure 1: German→English: Original beam-search strategy with different beam sizes on newstest2014 (BLEU and average fan out per sentence plotted against the beam size).

Figure 2: German→English: Different values of relative pruning measured on newstest2014 (BLEU and average fan out per sentence plotted against the relative pruning threshold, beam size = 5).

We fix the RNN GRU layers to be of 1000 cells each. For the training procedure, we use SGD (Bishop, 1995) to update model parameters with a mini-batch size of 64. The training data is shuffled after each epoch.
1702.01806#9
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
10
We measure the decoding speed by two numbers. First, we compare the actual speed relative to the same setup without any pruning. Second, we measure the average fan out per time step. For each time step, the fan out is defined as the number of candidates we expand. The fan out has an upper bound given by the beam size, but can be decreased either by early stopping (we reduce the beam every time we predict an end-of-sentence symbol) or by the proposed pruning schemes. For each pruning technique, we ran the experiments with different pruning thresholds and chose the largest threshold that did not degrade the translation performance on a selection set. Figure 1 shows the German→English translation performance and the average fan out per sentence for different beam sizes.
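As a rough illustration of this bookkeeping (not the paper's code), the sketch below records the fan out at every time step of a single decoding run and derives the two reported numbers; the decode_step callback, the timing with time.perf_counter, and the interpretation of the speed-up as the fraction of decoding time saved against an unpruned baseline are assumptions made for this example.

```python
# Illustrative measurement helper (assumed interface, not from the paper): tracks
# the fan out per time step and a relative speed number against a baseline run.
import time

def decode_with_stats(decode_step, max_steps, baseline_seconds):
    """decode_step(t) is assumed to return the list of candidates expanded at step t
    (an empty list once decoding has finished)."""
    fan_outs = []
    start = time.perf_counter()
    for t in range(max_steps):
        expanded = decode_step(t)
        if not expanded:      # all hypotheses ended with the end-of-sentence symbol
            break
        fan_outs.append(len(expanded))
    elapsed = time.perf_counter() - start
    avg_fan_out = sum(fan_outs) / max(len(fan_outs), 1)
    time_saved = 1.0 - elapsed / baseline_seconds   # fraction of decoding time saved
    return avg_fan_out, sum(fan_outs), time_saved
```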
1702.01806#10
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
11
In Figure 1, you can see the German→English translation performance and the average fan out per sentence for different beam sizes. Based on this experiment, we decided to run our pruning experiments for beam sizes 5 and 14. The German→English results can be found in Table 1. By using the combination of all pruning techniques, we can speed up the decoding process by 13% for beam size 5 and by 43% for beam size 14 without any drop in performance. The relative pruning technique works best for beam size 5, whereas the absolute pruning technique works best for beam size 14. Figure 2 illustrates the decoding speed for different relative pruning thresholds at beam size 5. Setting the threshold higher than 0.6 hurts the translation performance. A nice side effect is that it becomes possible to decode without any fixed beam size when we apply pruning. In that case the decoding speed drops, while the translation performance does not change. Further, we looked at the number of search errors introduced by our pruning schemes (the number of times we prune the best scoring hypothesis): 5% of the sentences change due to search errors for beam size 5, and 9% of the sentences change for beam size 14, when using all four pruning techniques together.
1702.01806#11
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
12
The Chinese→English translation results can be found in Table 2. We can speed up the decoding process by 10% for beam size 5 and by 24% for beam size 14 without loss in translation quality. In addition, we measured the number of search errors introduced by pruning the search: only 4% of the sentences change for beam size 5, whereas 22% of the sentences change for beam size 14.

# 6 Conclusion

The original beam search decoder used in Neural Machine Translation is very simple. It generates translations from left to right while looking at a fixed number (beam) of candidates from the last time step only. By setting the beam size large enough, we ensure that the best translation performance can be reached, with the drawback that many candidates whose scores are far away from the best are also explored. In this paper, we introduced several pruning techniques which prune candidates whose scores are far away from the best one. By applying a combination of absolute and relative pruning schemes, we speed up the decoder by up to 43% without losing any translation quality. Putting more diversity into the decoder did not improve the translation quality.
1702.01806#12
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
13
pruning | beam size | speed up | avg fan out per sent | tot fan out per sent | newstest2014 BLEU | newstest2014 TER | newstest2015 BLEU | newstest2015 TER
no pruning | 1 | - | 1.00 | 25 | 25.5 | 56.8 | 26.1 | 55.4
no pruning | 5 | - | 4.54 | 122 | 27.3 | 54.6 | 27.4 | 53.7
rp | 5 | 6% | 3.71 | 109 | 27.3 | 54.7 | 27.3 | 53.8
ap | 5 | 5% | 4.11 | 116 | 27.3 | 54.6 | 27.4 | 53.7
rpl | 5 | 5% | 4.25 | 118 | 27.3 | 54.7 | 27.4 | 53.8
mc | 5 | 0% | 4.54 | 126 | 27.4 | 54.6 | 27.5 | 53.8
rp+ap+rpl+mc | 5 | 13% | 3.64 | 101 | 27.3 | 54.6 | 27.3 | 53.8
no pruning | 14 | - | 12.19 | 363 | 27.6 | 54.3 | 27.6 | 53.5
rp | 14 | 10% | 10.38 | 315 | 27.6 | 54.3 | 27.6 | 53.4
ap | 14 | 29% | 9.49 | 279 | 27.6 | 54.3 | 27.6 | 53.5
rpl | 14 | 24% | 10.27 | 306 | 27.6 | 54.4 | 27.7 | 53.4
mc | 14 | 1% | 12.21 | 347 | 27.6 | 54.4 | 27.7 | 53.4
rp+ap+rpl+mc | 14 | 43% | 8.44 | 260 | 27.6 | 54.5 | 27.6 | 53.4
rp+ap+rpl+mc | - | - | 28.46 | 979 | 27.6 | 54.4 | 27.6 | 53.3
1702.01806#13
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
16
pruning | beam size | speed up | avg fan out per sent | tot fan out per sent | MT08 nw BLEU | MT08 nw TER | MT08 wb BLEU | MT08 wb TER
no pruning | 1 | - | 1.00 | 29 | 27.3 | 61.7 | 26.0 | 60.3
no pruning | 5 | - | 4.36 | 137 | 34.4 | 57.3 | 30.6 | 58.2
rp | 5 | 1% | 4.32 | 134 | 34.4 | 57.3 | 30.6 | 58.2
ap | 5 | 4% | 4.26 | 132 | 34.3 | 57.3 | 30.6 | 58.2
rpl | 5 | 1% | 4.35 | 135 | 34.4 | 57.5 | 30.6 | 58.3
mc | 5 | 0% | 4.37 | 139 | 34.4 | 57.4 | 30.7 | 58.2
rp+ap+rpl+mc | 5 | 10% | 3.92 | 121 | 34.3 | 57.3 | 30.6 | 58.2
no pruning | 14 | - | 11.96 | 376 | 35.3 | 57.1 | 31.2 | 57.8
rp | 14 | 3% | 11.62 | 362 | 35.2 | 57.2 | 31.2 | 57.8
ap | 14 | 14% | 10.15 | 321 | 35.2 | 56.9 | 31.1 | 57.9
rpl | 14 | 10% | 10.93 | 334 | 35.3 | 57.2 | 31.1 | 57.9
mc | 14 | 0% | 11.98 | 378 | 35.3 | 56.9 | 31.1 | 57.8
rp+ap+rpl+mc | 14 | 24% | 8.62 | 306 | 35.3 | 56.9 | 31.1 | 57.8
rp+ap+rpl+mc | - | - | 38.76 | 1411 | 35.2 | 57.3 | 31.1 | 57.9
1702.01806#16
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
18
Table 2: Results Chinese→English: relative pruning (rp), absolute pruning (ap), relative local pruning (rpl) and maximum candidates per node (mc).

# References

D. Bahdanau, K. Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. ArXiv e-prints.

Christopher M. Bishop. 1995. Neural networks for pattern recognition. Oxford University Press.

Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation (WMT16). Proceedings of WMT.

Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. 2013. Audio chord recognition with recurrent neural networks. In ISMIR. Citeseer, pages 335–340.

Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711.

Xiaoguang Hu, Wei Li, Xiang Lan, Hua Wu, and Haifeng Wang. 2015. Improved beam search with constrained softmax for NMT. Proceedings of MT Summit XV, page 297.
1702.01806#18
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
19
Xiaoguang Hu, Wei Li, Xiang Lan, Hua Wu, and Haifeng Wang. 2015. Improved beam search with constrained softmax for NMT. Proceedings of MT Summit XV, page 297.

Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of ACL. Beijing, China, pages 1–10.

Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle.

Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL. Beijing, China, pages 11–19.

Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine translation. arXiv preprint arXiv:1605.03209.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
1702.01806#19
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1702.01806
20
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.

Richard Zens, Daisy Stanton, and Peng Xu. 2012. A systematic comparison of phrase table pruning techniques. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 972–983.
1702.01806#20
Beam Search Strategies for Neural Machine Translation
The basic concept in Neural Machine Translation (NMT) is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is then using a simple left-to-right beam-search decoder to generate new translations that approximately maximize the trained conditional probability. The current beam search strategy generates the target sentence word by word from left-to- right while keeping a fixed amount of active candidates at each time step. First, this simple search is less adaptive as it also expands candidates whose scores are much worse than the current best. Secondly, it does not expand hypotheses if they are not within the best scoring candidates, even if their scores are close to the best one. The latter one can be avoided by increasing the beam size until no performance improvement can be observed. While you can reach better performance, this has the draw- back of a slower decoding speed. In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores. We speed up the original decoder by up to 43% for the two language pairs German-English and Chinese-English without losing any translation quality.
http://arxiv.org/pdf/1702.01806
Markus Freitag, Yaser Al-Onaizan
cs.CL
First Workshop on Neural Machine Translation, 2017
Proceedings of the First Workshop on Neural Machine Translation, 2017
cs.CL
20170206
20170614
[ { "id": "1605.03209" }, { "id": "1508.07909" }, { "id": "1609.08144" } ]
1701.08718
0
# Memory Augmented Neural Networks with Wormhole Connections

Caglar Gulcehre, Montreal Institute for Learning Algorithms, Universite de Montreal, Montreal, Canada ([email protected])
Sarath Chandar, Montreal Institute for Learning Algorithms, Universite de Montreal, Montreal, Canada ([email protected])
Yoshua Bengio, Montreal Institute for Learning Algorithms, Universite de Montreal, Montreal, Canada ([email protected])

# Abstract
1701.08718#0
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
1
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long
1701.08718#1
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
2
are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
1701.08718#2
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
4
Memory (LSTM) units (Hochreiter and Schmidhuber, 1997) were proposed as an alternative architecture which can handle long range dependencies better than a vanilla RNN. A simplified version of LSTM unit called Gated Recurrent Unit (GRU), proposed in (Cho et al., 2014), has proven to be successful in a number of applications (Bahdanau et al., 2015; Xu et al., 2015; Trischler et al., 2016; Kaiser and Sutskever, 2015; Serban et al., 2016). Even though LSTMs and GRUs attempt to solve the vanishing gradient problem, the memory in both architectures is stored in a single hidden vector as it is done in an RNN and hence accessing the information too far in the past can still be difficult. In other words, LSTM and GRU models have a limited ability to perform a search through its past memories when it needs to access a relevant information for making a prediction. Extending the capabilities of neural networks with a memory component has been explored in the literature on different applications with different architectures (Weston et al., 2015; Graves et al., 2014; Joulin and Mikolov,
1701.08718#4
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
6
Memory augmented neural networks (MANN) such as neural Turing machines (NTM) (Graves et al., 2014; Rae et al., 2016), dynamic NTM (D-NTM) (Gulcehre et al., 2016), and Differentiable Neural Computers (DNC) (Graves et al., 2016) use an external memory (usually a matrix) to store information and the MANN’s controller can learn to both read from and write into the external memory. As we show here, it is in general possible to use particular MANNs to explicitly store the previous hidden states of an RNN in the memory and that will provide shortcut connections through time, called here wormhole connections, to look into the history of the states of the RNN controller. Learning to read and write into an external memory by using neural networks gives the model more freedom or flexibility to retrieve information from its past, forget or store new information into the memory. However, if the addressing mechanism for read and/or write operations are continuous (like in the NTM and continuous D-NTM), then the access may be too diffuse, especially early on during training. This can hurt especially the writing operation, since a diffused write
1701.08718#6
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
7
D-NTM), then the access may be too diffuse, especially early on during training. This can hurt especially the writing operation, since a diffused write operation will overwrite a large fraction of the memory at each step, yielding fast vanishing of the memories (and gradients). On the other hand, discrete addressing, as used in the discrete D-NTM, should be able to perform this search through the past, but prevents us from using straight backpropagation for learning how to choose the address.
1701.08718#7
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
8
We investigate the flow of the gradients and how the wormhole connections introduced by the controller affect it. Our results show that the wormhole connections created by the controller of the MANN can significantly reduce the effects of the vanishing gradients by shortening the paths that the signal needs to travel between the dependencies. We also discuss how MANNs can generalize to sequences longer than the ones seen during training. In a discrete D-NTM, the controller must learn to read from and write into the external memory by itself, and additionally, it should also learn the reader/writer synchronization. This can make the learning more challenging. In spite of this difficulty, Gulcehre et al. (2016) reported that the discrete D-NTM can learn faster than the continuous D-NTM on some of the bAbI tasks. We provide a formal analysis of gradient flow in MANNs based on discrete addressing and justify this result. In this paper, we also propose a new MANN based on discrete addressing called TARDIS (Temporal Automatic Relation Discovery in Sequences). In TARDIS, memory access is based on tying the write and read heads of the model after the memory is filled up.
1701.08718#8
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
9
In TARDIS, memory access is based on tying the write and read heads of the model after the memory is filled up. When the memory is not full, the write head stores information in the memory in sequential order. The main characteristics of TARDIS are as follows. TARDIS is a simple memory augmented neural network model which can represent long-term dependencies efficiently by using an external memory of small size. TARDIS represents the dependencies between the hidden states inside the memory. We show both theoretically and experimentally that TARDIS fixes, to a large extent, the problems related to long-term dependencies. Our model can also store sub-sequences or sequence chunks into the memory. As a consequence, the controller can learn to represent high-level temporal abstractions as well. TARDIS performs well on several structured output prediction tasks, as verified in our experiments. The idea of using an external memory with attention can be justified with the concept of mental time travel, which humans occasionally perform to solve daily tasks. In particular, in the cognitive science literature, the concept of chronesthesia is known to be a form of consciousness which allows humans to think about time subjectively and perform mental time travel (Tulving, 2002). TARDIS is inspired by this ability of humans, which allows one to look up past memories and plan for the future using the episodic memory.
1701.08718#9
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
10
# 2. TARDIS: A Memory Augmented Neural Network Neural network architectures with an external memory represent the memory in a matrix form, such that at each time step t the model can both read from and write to the external memory. The whole content of the external memory can be considered as a generalization of hidden state vector in a recurrent neural network. Instead of storing all the information into a single hidden state vector, our model can store them in a matrix which has a higher capacity and with more targeted ability to substantially change or use only a small subset of the memory at each time step. The neural Turing machine (NTM) (Graves et al., 2014) is such an example of a MANN, with both reading and writing into the memory. # 2.1 Model Outline
1701.08718#10
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
11
# 2.1 Model Outline

In this subsection, we describe the basic structure of TARDIS¹ (Temporal Automatic Relation Discovery In Sequences). TARDIS is a MANN which has an external memory matrix M_t ∈ R^{k×q}, where k is the number of memory cells and q is the dimensionality of each cell. The model has an RNN controller which can read from and write to the external memory at every time step. To read from the memory, the controller generates the read weights w^r_t ∈ R^{k×1}, and the reading operation is typically achieved by computing the dot product between the read weights w^r_t and the memory M_t, resulting in the content vector r_t ∈ R^{q×1}:

r_t = (M_t)^T w^r_t.   (1)

TARDIS uses discrete addressing, hence w^r_t is a one-hot vector and the dot product chooses one of the cells in the memory matrix (Zaremba and Sutskever, 2015; Gulcehre et al., 2016). The controller also generates the write weights w^w_t ∈ R^{1×k}, likewise a one-hot vector with discrete addressing, to write into the memory. We will omit biases from our equations for simplicity in the rest of the paper.

1. The name of the model is inspired by the time machine in the popular TV series Dr. Who.
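As a small illustration of Eq. (1) under discrete addressing (a sketch with placeholder sizes, not code from the paper), multiplying the transposed memory by a one-hot read weight vector simply selects one row, i.e. one stored micro-state, of M_t:

```python
# Minimal sketch of the discrete read in Eq. (1): with one-hot read weights,
# (M_t)^T w_t^r selects a single cell of the memory matrix. Sizes are placeholders.
import numpy as np

k, q = 5, 8                      # number of memory cells, cell dimensionality
M_t = np.random.randn(k, q)      # external memory at time step t

w_r = np.zeros(k)                # discrete (one-hot) read weights
w_r[2] = 1.0                     # the controller chose cell 2

r_t = M_t.T @ w_r                # Eq. (1): content vector, shape (q,)
assert np.allclose(r_t, M_t[2])  # identical to directly indexing the chosen cell
```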
1701.08718#11
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
12
We omit biases from our equations for simplicity in the rest of the paper. Let i be the index of the non-zero entry in the one-hot vector w^w_t; the controller then writes a linear projection of the current hidden state to the memory location M_t[i]:

M_t[i] = W_m h_t, (2)

where W_m ∈ R^{d_m×d_h} is the projection matrix that projects the d_h-dimensional hidden state vector to a d_m-dimensional micro-state vector, with d_h > d_m. At every time step, the hidden state h_t of the controller is also conditioned on the content r_t read from the memory. The wormhole connections are created by conditioning h_t on r_t:

h_t = φ(x_t, h_{t-1}, r_t). (3)

As each cell in the memory is a linear projection of one of the previous hidden states, conditioning the controller's hidden state on the content read from the memory can be interpreted as a way of creating short-cut connections across time (from the time t' when h_{t'} was written to the time t when it was read through r_t), which can help the flow of gradients across time. This is possible because of the discrete addressing used for the read and write operations.
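A minimal sketch of the write of Eq. (2) and the conditioning of Eq. (3) follows. It is ours, not the paper's code: the plain tanh cell standing in for φ, the PyTorch tensors, and the sizes are assumptions (TARDIS actually uses the LSTM controller of Section 2.3).

```python
import torch

d_h, d_m, d_x, k = 32, 16, 10, 8                  # illustrative sizes (d_h > d_m as in the text)
M_t = torch.randn(k, d_m)                         # content section of the memory (micro-states)
W_m = 0.1 * torch.randn(d_m, d_h)                 # projection matrix of Eq. (2)

h_prev = torch.randn(d_h)
x_t = torch.randn(d_x)
i = 2                                             # index of the cell selected by the one-hot weights
r_t = M_t[i]                                      # content read from that cell

# Eq. (3): the controller is conditioned on x_t, h_{t-1} and the read content r_t.
W_x = 0.1 * torch.randn(d_h, d_x)
W_h = 0.1 * torch.randn(d_h, d_h)
W_r = 0.1 * torch.randn(d_h, d_m)
h_t = torch.tanh(W_x @ x_t + W_h @ h_prev + W_r @ r_t)

# Eq. (2): write the micro-state (a linear projection of the new hidden state) into cell i.
M_t[i] = W_m @ h_t
```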
1701.08718#12
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
13
However, the main challenge for the model is to learn proper read and write mechanisms so that it can write the hidden states of previous time steps that will be useful for future predictions and read them at the right time step. We call this the reader/writer synchronization problem. Instead of designing complicated addressing mechanisms to mitigate the difficulty of learning how to properly address the external memory, TARDIS side-steps the reader/writer synchronization problem by using the following heuristic. For the first k time steps, our model writes the micro-states into the k cells of the memory in sequential order. When the memory becomes full, the most effective strategy for preserving the information stored in the memory is to replace the memory cell that has been read with the micro-state generated from the hidden state of the controller after it is conditioned on the memory cell that has been read. If the model needs to perfectly retain the memory cell that it has just overwritten, the controller can in principle learn to do so by copying its read input to its write output (into the same memory cell). The pseudocode and the details of the memory update algorithm for TARDIS are presented in Algorithm 1. There are two missing pieces in Algorithm 1: How are the read weights generated? What is the structure of the controller function φ? We answer these two questions in detail in the next two subsections.
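A minimal sketch of this write heuristic (our code; the 0-based indexing and PyTorch tensors are assumptions):

```python
import torch

def tardis_write(M, t, k, read_index, micro_state):
    """Write heuristic described above: fill cells 0..k-1 sequentially for the first k steps,
    then overwrite the cell that was just read. A sketch, not the paper's implementation."""
    if t < k:
        M[t] = micro_state           # memory not yet full: sequential write
    else:
        M[read_index] = micro_state  # memory full: replace the cell the controller just read
    return M
```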
1701.08718#13
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
14
# 2.2 Addressing mechanism Similar to D-NTM, the memory matrix M_t of TARDIS has a disjoint address section A_t ∈ R^{k×a} and content section C_t ∈ R^{k×c}, with M_t = [A_t; C_t] and M_t ∈ R^{k×q} for q = c + a. However, unlike D-NTM, the address vectors are fixed to random sparse vectors. The controller reads both the address and the content parts of the memory, but it only writes into the content section of the memory. The read weights w^r_t are generated by an MLP which uses the information coming from h_t, x_t, M_t and the usage vector u_t (described below). The MLP is parametrized as follows:
1701.08718#14
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
15
Algorithm 1 Pseudocode for the controller and memory update mechanism of TARDIS.

  Initialize h_0
  Initialize M_0
  for t ∈ {1, ..., T} do
    Compute the read weights w̃^r_t ← read(h_{t-1}, M_t, x_t)
    Sample from / discretize w̃^r_t and obtain the one-hot w^r_t
    Read from the memory, r_t ← (M_t)^T w^r_t
    Compute the new controller hidden state, h_t ← φ(x_t, h_{t-1}, r_t)
    if t ≤ k then
      Write into the memory, M_t[t] ← W_m h_t
    else
      Select the memory location to write into, j ← argmax_j w^r_t[j]
      Write into the memory, M_t[j] ← W_m h_t
    end if
  end for

The read weights are computed as

π_t[i] = a^T tanh(W^γ_h h_t + W^γ_x x_t + W^γ_m M_t[i] + W^γ_u u_t), (4)
w̃^r_t = softmax(π_t), (5)

where {a, W^γ_h, W^γ_x, W^γ_m, W^γ_u} are learnable parameters. w^r_t is a one-hot vector obtained by either sampling from w̃^r_t or by using argmax over w̃^r_t.
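The following sketch spells out the addressing MLP of Eqs. (4)-(5) and its discretization. It is ours: the parameter shapes, the sizes, and the argmax discretization are assumptions consistent with the text.

```python
import torch

k, q, d_h, d_x, d_a = 8, 16, 32, 10, 20            # illustrative sizes
h_t, x_t = torch.randn(d_h), torch.randn(d_x)
M_t = torch.randn(k, q)
u_t = torch.randn(k)                               # usage vector of Eq. (6)

# Learnable parameters of the addressing MLP in Eqs. (4)-(5); exact shapes are our assumption.
a   = torch.randn(d_a)
W_h = 0.1 * torch.randn(d_a, d_h)
W_x = 0.1 * torch.randn(d_a, d_x)
W_m = 0.1 * torch.randn(d_a, q)
W_u = 0.1 * torch.randn(d_a, k)

# Eq. (4): one logit per memory cell, conditioned on h_t, x_t, the cell content and the usage vector.
logits = torch.stack([a @ torch.tanh(W_h @ h_t + W_x @ x_t + W_m @ M_t[i] + W_u @ u_t)
                      for i in range(k)])
# Eq. (5): a distribution over cells; TARDIS then discretizes it by sampling or taking the argmax.
w_tilde = torch.softmax(logits, dim=0)
w_r = torch.zeros(k)
w_r[torch.argmax(w_tilde)] = 1.0                   # one-hot read weights w^r_t
```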
1701.08718#15
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
16
u_t is the usage vector, which denotes the frequency of accesses to each cell in the memory. u_t is computed by summing the discrete address vectors w^r_i of previous time steps and normalizing them:

u_t = norm(Σ_{i=1}^{t-1} w^r_i). (6)

The norm(·) applied in Equation 6 is a simple feature-wise centering and divisive variance normalization. This normalization step makes training with the usage vectors easier. The usage vector can help the attention mechanism choose between the different memory cells based on how frequently each cell has been accessed. For example, if a memory cell is very rarely accessed by the controller, at the next time step it can learn to assign more weight to that cell by looking at the usage vector. In this way, the controller can learn an LRU access mechanism (Santoro et al., 2016; Gulcehre et al., 2016). Further, in order to prevent the model from learning deficient addressing mechanisms, e.g. repeatedly reading the same memory cell, which would not increase the memory capacity of the model, we decrease the probability of the last read memory location by subtracting 100 from the corresponding logit in π_t.
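A short sketch of Eq. (6) and of the logit masking described above (our code; the small epsilon in the normalization is an assumption):

```python
import torch

def usage_vector(read_history):
    """Eq. (6): sum the one-hot read weights of previous steps, then apply feature-wise
    centering and divisive variance normalization (our reading of norm(.))."""
    counts = torch.stack(read_history).sum(dim=0)
    return (counts - counts.mean()) / (counts.std() + 1e-6)

def discourage_last_read(logits, last_read_index):
    """Subtract 100 from the logit of the last-read cell so it is very unlikely to be re-read."""
    logits = logits.clone()
    logits[last_read_index] -= 100.0
    return logits
```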
1701.08718#16
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
17
# 2.3 TARDIS Controller We use an LSTM controller, and its gates are modified to take into account the content r_t of the cell read from the memory:

[f_t; i_t; o_t] = sigm(W_h h_{t-1} + W_x x_t + W_r r_t), (7)

where f_t, i_t, and o_t are the forget gate, input gate, and output gate respectively. α_t and β_t are the scalar RESET gates which control the magnitude of the information flowing from the memory and the previous hidden states to the cell of the LSTM c_t. By controlling the flow of information into the LSTM cell, those gates allow the model to store sub-sequences or chunks of sequences into the memory instead of the entire context. We use the Gumbel sigmoid (Maddison et al., 2016; Jang et al., 2016) for α_t and β_t due to its behavior being close to binary:

[α_t; β_t] = gumbel-sigmoid(W^{gl}_h h_{t-1} + W^{gl}_x x_t + W^{gl}_r r_t). (8)
1701.08718#17
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
18
As in Equation 8, we empirically find the gumbel-sigmoid to be easier to train than the regular sigmoid. The temperature of the Gumbel-sigmoid is fixed to 0.3 in all our experiments. The cell of the LSTM controller, c_t, is computed according to Equation 9 with the α_t and β_t RESET gates:

c̃_t = tanh(β_t W^g_h h_{t-1} + W^g_x x_t + α_t W^g_r r_t),
c_t = f_t c_{t-1} + i_t c̃_t, (9)

The hidden state of the LSTM controller is computed as follows:

h_t = o_t tanh(c_t). (10)

In Figure 1, we illustrate the interaction between the controller and the memory with the various heads and components of the controller. # 2.4 Micro-states and Long-term Dependencies A micro-state of the LSTM for a particular time step is the summary of the information that has been stored in the LSTM controller of the model. By attending over the cells of the memory, which contain previous micro-states of the LSTM, the model can explicitly learn to restore information from its own past.
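The modified LSTM step of Eqs. (7)-(10) can be written compactly as in the sketch below. The stacked-gate parametrization, the weight dictionary, and the absence of biases are our assumptions; the RESET gates alpha_t and beta_t are taken as given (they come from the Gumbel-sigmoid of Eq. 8).

```python
import torch

def tardis_lstm_step(x_t, h_prev, c_prev, r_t, alpha_t, beta_t, p):
    """One step of the TARDIS LSTM controller, Eqs. (7)-(10). `p` is a dict of weight
    matrices: W_h / W_x / W_r map to 3*d_h (the stacked gates), Wg_* map to d_h."""
    # Eq. (7): forget, input and output gates computed from h_{t-1}, x_t and the read content r_t.
    gates = torch.sigmoid(p["W_h"] @ h_prev + p["W_x"] @ x_t + p["W_r"] @ r_t)
    f_t, i_t, o_t = gates.chunk(3)

    # Eq. (9): the RESET gates beta_t and alpha_t scale the recurrent and memory contributions.
    c_tilde = torch.tanh(beta_t * (p["Wg_h"] @ h_prev) + p["Wg_x"] @ x_t
                         + alpha_t * (p["Wg_r"] @ r_t))
    c_t = f_t * c_prev + i_t * c_tilde
    h_t = o_t * torch.tanh(c_t)      # Eq. (10)
    return h_t, c_t
```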
1701.08718#18
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
19
The controller can learn to represent high-level temporal abstractions by creating wormhole connections through the memory, as illustrated in Figure 2. In this example, the model takes the token x_0 at the first timestep and stores its representation in the first memory cell with address a_0. In the second timestep, the controller takes x_1 as input and writes into the second memory cell with the address a_1. Furthermore, the β_1 gate blocks the connection from h_1 to h_2. At the third timestep, the controller starts reading. It receives x_2 as input and

[Figure 1 legend: MLP output; read/write output; observed input; output prediction; controller; general, multiplicative, and affine connections.]
1701.08718#19
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
20
Figure 1: At each time step, the controller takes x_t, the memory cell content that has been read r_t, and the hidden state of the previous timestep h_{t-1}. Then, it generates α_t, which controls the contribution of r_t to the internal dynamics of the new controller state h_t (we omit β_t in this visualization). Once the memory M_t becomes full, the discrete addressing weights w^r_t are generated by the controller and used to both read from and write into the memory. To predict the target y_t, the model has to use both h_t and r_t. The controller then reads the first memory cell where the micro-state of h_0 was stored. After reading, it computes the hidden state h_2 and writes the micro-state of h_2 into the first memory cell. The length of the path passing through the micro-states of h_0 and h_2 would be 1. The wormhole connection from h_2 to h_0 would skip a timestep.
1701.08718#20
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
21
A regular single-layer RNN has a fixed graphical representation, a linear chain, when considering only the connections through its recurrent states along the temporal axis. TARDIS is more flexible in this respect: it can learn directed graphs with more diverse structures using the wormhole connections and the RESET gates. The directed graph that TARDIS can learn through its recurrent states has at most degree 4 at each vertex (maximum 2 incoming and 2 outgoing edges), and its structure depends on the number of cells (k) that can be stored in the memory. In this work, we focus on a variation of TARDIS where the controller maintains a fixed-size external memory. However, as in (Cheng et al., 2016), it is possible to use a memory that grows with the length of the input sequences, but that would not scale and can be more difficult to train with discrete addressing.
1701.08718#21
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
22
[Figure 2 shows the memory states M_1 ... M_5 and the read/write operations performed at each step.] Figure 2: TARDIS's controller can learn to represent the dependencies among the input tokens by choosing which cells to read and write, creating wormhole connections. x_t represents the input to the controller at timestep t and h_t is the hidden state of the controller RNN. # 3. Training TARDIS In this section, we explain how to train TARDIS as a language model. We use language modeling as an example application; however, we would like to highlight that TARDIS can also be applied to complex sequence-to-sequence learning tasks.
1701.08718#22
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
23
Consider N training examples where each example is a sequence of length T. At every time step t, the model receives the input x_t ∈ {0, 1}^{|V|}, a one-hot vector of size equal to the vocabulary size |V|, and should produce the output y_t ∈ {0, 1}^{|V|}, also a one-hot vector of size |V|. The output of the model for the i-th example and t-th time step is computed as follows:

o^{(i)}_t = softmax(W_o g(h^{(i)}_t, r^{(i)}_t)), (11)

where W_o is a learnable parameter matrix and g(h_t, r_t) is a single-layer MLP which combines both h_t and r_t, as in the deep fusion of (Pascanu et al., 2013a). The task loss is the categorical cross-entropy between the targets and the model outputs. The superscript i denotes that the variable is the output for the i-th sample in the training set:

L_model(θ) = -(1/N) Σ_{i=1}^{N} Σ_{t=1}^{T} Σ_{k=1}^{|V|} y^{(i)}_t[k] log(o^{(i)}_t[k]), (12)
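For one example and one time step, Eqs. (11)-(12) amount to the following sketch (ours; the tanh fusion MLP g and all sizes are assumptions).

```python
import torch

V, d_h, d_m, d_g = 100, 32, 16, 64        # vocabulary size and (assumed) layer widths
h_t, r_t = torch.randn(d_h), torch.randn(d_m)
y_t = torch.zeros(V); y_t[7] = 1.0        # one-hot target at this time step

# g(h_t, r_t): a single-layer MLP fusing the controller state and the read content (Eq. 11).
W_gh, W_gr = 0.1 * torch.randn(d_g, d_h), 0.1 * torch.randn(d_g, d_m)
W_o = 0.1 * torch.randn(V, d_g)

g = torch.tanh(W_gh @ h_t + W_gr @ r_t)
o_t = torch.softmax(W_o @ g, dim=0)

# Eq. (12) for a single example and time step; in training this is averaged over the
# N examples and summed over the T time steps and the vocabulary.
loss_t = -(y_t * torch.log(o_t + 1e-9)).sum()
```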
1701.08718#23
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
24
However, the discrete decisions taken for memory access at every time step make the model non-differentiable, and hence we need to rely on approximate methods for computing gradients with respect to the discrete address vectors. In this paper we explore two such approaches: REINFORCE (Williams, 1992) and the straight-through estimator (Bengio et al., 2013). # 3.1 Using REINFORCE REINFORCE is a likelihood-ratio method which provides a convenient and simple way of estimating the gradients of the stochastic actions. In this paper, we focus on the application of REINFORCE to sequential prediction tasks, such as language modelling. For example i, let R(w^{r(i)}_j) be the reward at timestep j. We are interested in maximizing the expected return for the whole episode, defined below:

J(θ) = E[Σ_{j=1}^{T} R(w^{r(i)}_j)]. (13)
1701.08718#24
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
25
Ideally, we would like to compute the gradients of Equation 13; however, computing the gradient of the expectation may not be feasible. We therefore use a Monte-Carlo approximation and compute the gradients using REINFORCE, which for the sequential prediction task can be written as in Equation 14:

∇_θ J(θ) = (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} (R(w^{r(i)}) - b_t) ∇_θ log(w^{r(i)}_t), (14)

where R(w^{r(i)}) is the return of the whole episode and b_t is the reward baseline. However, we can further assume that the future actions do not depend on the past rewards in the episode/trajectory and further reduce the variance of REINFORCE, as in Equation 15:

∇_θ J(θ) = (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} Σ_{j=t}^{T} (R(w^{r(i)}_j) - b_j) ∇_θ log(w^{r(i)}_t). (15)

In our preliminary experiments, we found that training the model is easier with discounted returns, instead of the centered undiscounted return:

∇_θ J(θ) = (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} Σ_{j=t}^{T} γ^{j-t} (R(w^{r(i)}_j) - b_j) ∇_θ log(w^{r(i)}_t). (16)
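In practice, an estimator like Eq. (16) is usually implemented through a surrogate loss whose gradient equals the estimator. The sketch below is ours, not the paper's code; the discount value gamma and the shape of the inputs are assumptions.

```python
import torch

def reinforce_surrogate(log_probs, rewards, baselines, gamma=0.97):
    """Surrogate loss whose gradient matches the discounted estimator of Eq. (16) for one
    sequence. log_probs[t] is the (differentiable) log-probability of the read action taken
    at step t, rewards[t] is R(w^r_t) and baselines[t] is b_t."""
    T = len(rewards)
    surrogate = 0.0
    for t in range(T):
        # centered, discounted return from step t onwards
        ret = sum((gamma ** (j - t)) * (rewards[j] - baselines[j]) for j in range(t, T))
        surrogate = surrogate + ret * log_probs[t]
    # maximizing the expected return corresponds to minimizing the negative surrogate
    return -surrogate
```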
1701.08718#25
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
26
Training REINFORCE with an Auxiliary Cost. Training models with REINFORCE can be difficult due to the variance imposed on the gradients. In recent years, researchers have developed several tricks to mitigate the effect of high variance in the gradients. As proposed by (Mnih and Gregor, 2014), we also use variance normalization on the REINFORCE gradients. A natural choice for the reward R(w^{r(i)}_j) is the log-likelihood of the prediction at that timestep. Our initial experiments showed that REINFORCE with this reward structure often tends to under-utilize the memory and mainly rely on the internal memory of the LSTM controller. In particular, at the beginning of training the model can decrease the loss by relying only on the controller's memory, and this can cause REINFORCE to increase the log-likelihood of random actions. To deal with this issue, instead of using the log-likelihood of the model as the reward, we introduce an auxiliary cost to use as the reward R', computed from predictions that are based only on the memory cell r_t read by the controller and not on the hidden state of the controller:
1701.08718#26
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
27
R(w^{r(i)}_j) = Σ_k y^{(i)}_j[k] log(softmax(W^o_r r^{(i)}_j + W^o_x x^{(i)}_j))[k], (17)

where W^o_r and W^o_x are learnable parameters (with W^o_x ∈ R^{d_o×d_x}), d_o is the dimensionality of the output, and d_x is the dimensionality of the input of the model (for language modelling both d_o and d_x would be |V|). We do not backpropagate through r^{(i)}_j when computing this reward. # 3.2 Using Gumbel Softmax Training with REINFORCE can be challenging due to the high variance of the gradients; the gumbel-softmax with a straight-through estimator provides a good alternative to REINFORCE for tackling the variance issue. Unlike (Maddison et al., 2016; Jang et al., 2016), instead of annealing the temperature or fixing it, our model learns the inverse temperature with an MLP τ(h_t) which has a single scalar output conditioned on the hidden state of the controller:

τ(h_t) = softplus(w^τ h_t + b^τ) + 1, (18)
gumbel-softmax(π_t[i]) = softmax((π_t[i] + ξ) τ(h_t)), (19)

where ξ denotes sampled Gumbel noise.
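A sketch of Eqs. (18)-(19) under our reading of the (garbled) source; the PyTorch code and the exact placement of the Gumbel noise are assumptions.

```python
import torch
import torch.nn.functional as F

def learned_inverse_temperature(h_t, w_tau, b_tau):
    """Eq. (18): tau(h_t) = softplus(w^T h_t + b) + 1, one scalar per time step."""
    return F.softplus(w_tau @ h_t + b_tau) + 1.0

def gumbel_softmax_read_weights(logits, inv_temp, eps=1e-9):
    """Eq. (19): perturb the addressing logits pi_t with Gumbel noise and apply a softmax
    scaled by the learned inverse temperature."""
    u = torch.rand_like(logits)
    gumbel_noise = -torch.log(-torch.log(u + eps) + eps)
    return torch.softmax((logits + gumbel_noise) * inv_temp, dim=-1)
```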
1701.08718#27
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
28
We replace the softmax in Equation 5 with the gumbel-softmax defined above. During the forward pass we use the discrete sample of w^r_t, while the continuous gumbel-softmax output is used for gradient computation, and hence we rely on the straight-through estimator. Learning the temperature of the Gumbel-softmax reduces the burden of performing an extensive hyper-parameter search for the temperature. # 4. Related Work The Neural Turing Machine (NTM) (Graves et al., 2014) is the class of architecture most closely related to our model. NTMs have proven successful at generalizing to sequences longer than those they were trained on, and NTMs have been shown to be more effective at solving algorithmic tasks than gated models such as LSTMs. However, the NTM can have limitations due to some of its design choices. Because the controller lacks precise knowledge of the memory contents, the contents of the memory can overlap. These memory augmented models are also known to be complicated, which leads to difficulties in implementing and training them. The controller has no information about the sequence of operations or about information such as the frequency of read and write accesses to the memory. TARDIS tries to address these issues.
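One common way to realize this forward/backward split is the straight-through trick sketched below (our code, not the paper's; the multinomial sampling over the soft weights is an assumption).

```python
import torch

def straight_through_one_hot(w_tilde):
    """Use a hard one-hot sample in the forward pass while letting gradients flow through the
    soft gumbel-softmax probabilities in the backward pass (straight-through estimator)."""
    index = torch.multinomial(w_tilde, num_samples=1)
    hard = torch.zeros_like(w_tilde).scatter_(-1, index, 1.0)
    # value equals the hard one-hot; gradient equals the gradient of the soft w_tilde
    return (hard - w_tilde).detach() + w_tilde
```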
1701.08718#28
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
29
Gulcehre et al. (2016) proposed a variant of NTM called the dynamic NTM (D-NTM), which has learnable location-based addressing. D-NTM can be used with both continuous and discrete addressing. Discrete D-NTM is related to TARDIS in the sense that both models use discrete addressing for all memory operations. However, discrete D-NTM expects the controller to learn to read/write and also to learn reader/writer synchronization. TARDIS does not have this synchronization problem since the reader and the writer are tied. Rae et al. (2016) proposed a sparse access memory (SAM) mechanism for NTMs which can be seen as a hybrid of continuous and discrete addressing: SAM uses continuous addressing over a selected set of the top-K most relevant memory cells. Recently, Graves et al. (2016) proposed the differentiable neural computer (DNC), a successor of the NTM. Rocktäschel et al. (2015) and Cheng et al. (2016) proposed models that generate weights to attend over the previous hidden states of the RNN. However, since those models attend over the whole context, the computation of the attention can be inefficient.
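To make the contrast with dense attention concrete, here is a minimal NumPy sketch that restricts a softmax read to the K cells most similar to the query; it illustrates the general idea of sparse (top-K) addressing only and is not the SAM implementation of Rae et al. (2016). The function `sparse_topk_read`, the dot-product scoring, and the chosen sizes are illustrative assumptions.

```python
import numpy as np

def sparse_topk_read(memory, query, K=4):
    """Read from `memory` (N x d) with a softmax restricted to the K cells whose
    dot-product similarity with `query` is highest; all other weights are zero.
    A sketch of sparse top-K addressing, not the SAM algorithm itself."""
    scores = memory @ query                       # (N,) similarities
    topk = np.argpartition(scores, -K)[-K:]       # indices of the K best cells
    w = np.zeros_like(scores)
    s = scores[topk] - scores[topk].max()         # stable softmax over the top K
    e = np.exp(s)
    w[topk] = e / e.sum()
    return w @ memory, w                          # read vector and sparse weights

rng = np.random.default_rng(1)
M = rng.normal(size=(128, 32))                    # 128 cells of width 32
q = rng.normal(size=32)
r, w = sparse_topk_read(M, q, K=4)
print(np.count_nonzero(w))                        # -> 4
```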
1701.08718#29
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
30
Grefenstette et al. (2015) proposed a model that can store information in a data structure, such as a stack, deque or queue, in a differentiable manner. Grave et al. (2016) proposed a cache-based memory representation which stores the last k states of the RNN in the memory; similar to traditional cache-based models (Kuhn and De Mori, 1990), the model learns to choose a state from the memory for the prediction in language modeling tasks. # 5. Gradient Flow through the External Memory In this section, we analyze the flow of the gradients through the external memory and investigate how effectively it deals with the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994). First, we describe the vanishing gradient problem in an RNN and then describe how an external memory model can deal with it. For the sake of simplicity, we focus on vanilla RNNs throughout the analysis, but the same analysis can be extended to LSTMs. In our analysis, we also assume that the weights for the read/write heads are discrete. We will show that the rate at which gradients vanish through time for a memory-augmented recurrent neural network is much smaller than that of a regular vanilla recurrent neural network.
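For intuition about the cache-style mechanism mentioned above, the following small NumPy sketch keeps the last k hidden states in a fixed-capacity cache and scores them against the current state with a scaled dot product. It only illustrates the general idea; the function `cache_scores`, the scaling `theta`, and the deque-based cache are assumptions, not the exact model of Grave et al. (2016).

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)
d_h, k = 16, 5                                    # hidden size, cache capacity
cache = deque(maxlen=k)                           # keeps only the last k states

def cache_scores(h_t, cache, theta=1.0):
    """Score each cached hidden state against the current state h_t with a scaled
    dot product and normalize with a softmax; a higher weight means that state is
    preferred when forming the prediction. Illustrative only."""
    H = np.stack(list(cache))                     # (k, d_h)
    s = theta * (H @ h_t)
    s = s - s.max()                               # numerically stable softmax
    e = np.exp(s)
    return e / e.sum()

# Simulate a few steps of an RNN-like state trajectory, then query the cache.
h = rng.normal(size=d_h)
for _ in range(8):
    h = np.tanh(0.5 * h + rng.normal(scale=0.1, size=d_h))
    cache.append(h.copy())
print(cache_scores(h, cache).round(3))            # weights over the last k states
```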
1701.08718#30
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]
1701.08718
31
We will show that the rate at which gradients vanish through time for a memory-augmented recurrent neural network is much smaller than that of a regular vanilla recurrent neural network. Consider an RNN which at each timestep $t$ takes an input $x_t \in \mathbb{R}^d$ and produces an output $y_t \in \mathbb{R}^o$. The hidden state of the RNN can be written as $z_t = W h_{t-1} + U x_t$ (20) and $h_t = f(z_t)$ (21), where $W$ and $U$ are the recurrent and the input weights of the RNN respectively and $f(\cdot)$ is a non-linear activation function. Let $\mathcal{L} = \sum_t \mathcal{L}_t$ be the loss function that the RNN is trying to minimize. Given an input sequence of length $T$, we can write the derivative of the loss $\mathcal{L}$ with respect to the parameters $\theta$ as $\frac{\partial \mathcal{L}}{\partial \theta} = \sum_{1 \le t_1 \le T} \frac{\partial \mathcal{L}_{t_1}}{\partial \theta} = \sum_{1 \le t_1 \le T} \sum_{1 \le t_0 \le t_1} \frac{\partial \mathcal{L}_{t_1}}{\partial h_{t_1}} \frac{\partial h_{t_1}}{\partial h_{t_0}} \frac{\partial h_{t_0}}{\partial \theta}$ (22). The multiplication of many Jacobians of the form $\frac{\partial h_t}{\partial h_{t-1}}$ to obtain $\frac{\partial h_{t_1}}{\partial h_{t_0}}$ is the main reason for the vanishing and the exploding gradients (Pascanu et al., 2013b):
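To see the effect of the Jacobian products in Equation (22) numerically, the following standalone NumPy sketch (not from the paper) forms $\frac{\partial h_{t_1}}{\partial h_{t_0}}$ as a product of one-step Jacobians for a randomly initialized tanh RNN and prints its spectral norm, which typically shrinks geometrically with the time gap $t_1 - t_0$. The initialization scales, sizes, and sequence length are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
d, T = 32, 100                                         # hidden size, sequence length
W = rng.normal(scale=0.8 / np.sqrt(d), size=(d, d))    # recurrent weights (arbitrary init)
U = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))    # input weights
X = rng.normal(size=(T, d))                            # random input sequence

# Forward pass of a vanilla RNN with f = tanh: z_t = W h_{t-1} + U x_t, h_t = tanh(z_t).
h = np.zeros(d)
states = []
for t in range(T):
    h = np.tanh(W @ h + U @ X[t])
    states.append(h)

# One-step Jacobian for f = tanh: dh_t/dh_{t-1} = diag(1 - h_t**2) @ W.
def step_jacobian(h_t):
    return np.diag(1.0 - h_t ** 2) @ W

# dh_{t1}/dh_{t0} is the product of one-step Jacobians; its norm shrinks as the gap grows.
J = np.eye(d)
for gap, h_t in enumerate(states[1:], start=1):
    J = step_jacobian(h_t) @ J
    if gap % 20 == 0:
        print(f"t1 - t0 = {gap:3d}   ||dh_t1/dh_t0||_2 = {np.linalg.norm(J, 2):.3e}")
```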
1701.08718#31
Memory Augmented Neural Networks with Wormhole Connections
Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.
http://arxiv.org/pdf/1701.08718
Caglar Gulcehre, Sarath Chandar, Yoshua Bengio
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20170130
20170130
[ { "id": "1609.01704" }, { "id": "1603.09025" }, { "id": "1606.01305" }, { "id": "1503.08895" }, { "id": "1607.06450" }, { "id": "1605.07427" }, { "id": "1607.00036" }, { "id": "1609.06038" }, { "id": "1511.08228" }, { "id": "1611.01144" }, { "id": "1507.06630" }, { "id": "1603.05118" }, { "id": "1601.06733" }, { "id": "1609.09106" }, { "id": "1509.06664" }, { "id": "1506.02075" }, { "id": "1612.04426" }, { "id": "1607.03474" }, { "id": "1605.06065" }, { "id": "1606.02270" }, { "id": "1611.03068" }, { "id": "1611.00712" }, { "id": "1508.05326" } ]