Dataset schema (one row per paper chunk): doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
1607.00036 | 32 | Table 1: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU and feedforward controllers. FF stands for experiments conducted with the feedforward controller. Note that LBA* refers to the NTM that uses both LBA and CBA. In this table, we compare multi-step vs. single-step addressing, the original NTM with location-based plus content-based addressing vs. content-based addressing only, and discrete vs. continuous addressing on bAbI.
it to help with tasks that have non-trivial access patterns, and, as anticipated, we see a large gain with the D-NTM over the original NTM in tasks such as 12 - Conjunction and 17 - Positional Reasoning. | 1607.00036#32 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains, for each memory cell, two separate vectors: a content vector and an address vector. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on the Facebook bAbI tasks, using both a feedforward and a GRU controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines. We have done an extensive analysis of our model and of different variations of the NTM on the bAbI tasks. We also provide further experimental results on sequential pMNIST, Stanford Natural Language Inference, associative recall, and copy tasks. | http://arxiv.org/pdf/1607.00036 | Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio | cs.LG, cs.NE | 13 pages, 3 figures | null | cs.LG | 20160630 | 20170317 | [
{
"id": "1511.02301"
},
{
"id": "1603.05118"
},
{
"id": "1506.07503"
},
{
"id": "1506.02075"
},
{
"id": "1509.06664"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1502.05698"
},
{
"id": "1510.03931"
},
{
"id": "1602.08671"
},
{
"id": "1506.03340"
},
{
"id": "1503.08895"
},
{
"id": "1508.05326"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1511.07275"
},
{
"id": "1506.05869"
}
] |
Among the recurrent variants of the proposed D-NTM, we notice significant improvements from using discrete addressing over continuous addressing. We conjecture that this is because certain types of tasks require precise, sharp retrieval of a stored fact, in which case continuous addressing is at a disadvantage compared to discrete addressing. This is evident from the observation that the D-NTM with discrete addressing significantly outperforms the one with continuous addressing on the tasks 8 - Lists/Sets and 11 - Basic Coreference. Furthermore, this is in line with an earlier observation in (Xu et al., 2015), where discrete addressing was found to generalize better in the task of image caption generation.
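To make the contrast concrete, the following is a minimal NumPy sketch of soft vs. hard read addressing under simplified assumptions (a single memory matrix and dot-product scores, rather than the D-NTM's separate content and address vectors); it illustrates the idea only, not the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slots, dim = 128, 16
memory = rng.normal(size=(n_slots, dim))   # memory matrix, one row per slot
key = rng.normal(size=dim)                 # query emitted by the controller

# Similarity scores between the key and every memory slot.
scores = memory @ key

# Continuous (soft) addressing: a differentiable weighted average of all slots.
weights = np.exp(scores - scores.max())
weights /= weights.sum()
soft_read = weights @ memory               # blends content from every slot

# Discrete (hard) addressing: sample a single slot, so the retrieved content is
# exactly one stored vector -- sharper, but the sampling step is
# non-differentiable and needs REINFORCE-style training.
idx = rng.choice(n_slots, p=weights)
hard_read = memory[idx]

print(soft_read.shape, hard_read.shape)
```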
In Table 2, we also observe that the D-NTM with the feedforward controller and discrete attention performs worse than the LSTM and the D-NTM with continuous attention. However, when the proposed curriculum strategy from Sec. 3.2 is used, the average test error drops from 68.30 to 37.79.
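The curriculum itself is defined in Sec. 3.2 of the paper, which is not reproduced in this excerpt; the sketch below only illustrates the general idea of annealing from continuous to discrete attention during training. The linear schedule and the per-step mixing rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def attention_weights(scores, step, total_steps, rng):
    """Illustrative anneal: start with soft attention, end with sampled
    one-hot attention.  p_discrete follows an assumed linear schedule."""
    p_discrete = min(1.0, step / float(total_steps))  # 0 -> 1 over training
    soft = np.exp(scores - scores.max())
    soft /= soft.sum()
    if rng.random() < p_discrete:
        idx = rng.choice(len(scores), p=soft)         # hard, sampled slot
        hard = np.zeros_like(soft)
        hard[idx] = 1.0
        return hard
    return soft                                       # differentiable path

rng = np.random.default_rng(1)
scores = rng.normal(size=8)
for step in (0, 5_000, 10_000):
    print(step, attention_weights(scores, step, total_steps=10_000, rng=rng))
```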
We empirically found training the feedforward controller more difficult than training the recurrent controller. We train our feedforward-controller-based models four times longer (in terms of the number of updates) than the recurrent-controller-based ones in order to ensure that they converge on most of the tasks. On the other hand, the models trained with the GRU controller overfit on the bAbI tasks very quickly. For example, on tasks 3 and 16 the feedforward-controller-based model underfits (i.e., has high training loss) at the end of training, whereas with the same number of units the model with the GRU controller can overfit on those tasks after only 3,000 updates.
We notice a significant performance gap when our results are compared to the variants of the memory network (Weston et al., 2015b) (MemN2N and DMN+). We attribute this gap to the difficulty of learning to manipulate and store a complex input.
Graves et al. (2016) have also reported results with the differentiable neural computer (DNC) and the NTM on the bAbI dataset. However, their experimental setup differs from the one we use in this paper, which makes direct comparisons more difficult. The main differences are, broadly, that they used the embedding of each word as the input representation to the controller, whereas we used a GRU-based representation of each fact, and that they report only joint-training results, whereas we trained our models on the individual tasks separately. Despite these differences in setup (see Table 1 of the DNC paper), their mean NTM error of 28.5% (std +/- 2.9) is very close to the 31.4% error we obtain.
| Task | FF Soft D-NTM | FF Discrete D-NTM | FF Discrete* D-NTM |
|---|---|---|---|
| 1 | 4.38 | 81.67 | 14.79 |
| 2 | 27.5 | 76.67 | 76.67 |
| 3 | 71.25 | 79.38 | 70.83 |
| 4 | 0.00 | 78.65 | 44.06 |
| 5 | 1.67 | 83.13 | 17.71 |
| 6 | 1.46 | 48.76 | 48.13 |
| 7 | 6.04 | 54.79 | 23.54 |
| 8 | 1.70 | 69.75 | 35.62 |
| 9 | 0.63 | 39.17 | 14.38 |
| 10 | 19.80 | 56.25 | 56.25 |
| 11 | 0.00 | 78.96 | 39.58 |
| 12 | 6.25 | 82.5 | 32.08 |
| 13 | 7.5 | 75.0 | 18.54 |
| 14 | 17.5 | 78.75 | 24.79 |
| 15 | 0.0 | 71.42 | 39.73 |
| 16 | 49.65 | 71.46 | 71.15 |
| 17 | 1.25 | 43.75 | 43.75 |
| 18 | 0.24 | 48.13 | 2.92 |
| 19 | 39.47 | 71.46 | 71.56 |
| 20 | 0.0 | 76.56 | 9.79 |
| Avg.Err. | 12.81 | 68.30 | 37.79 |
Table 2: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with feedforward controller.
# 6.4 Visualization of Discrete Attention
We visualize the attention of the D-NTM with the GRU controller and discrete attention in Figure 2. From this example, we can see that the D-NTM has learned to find the correct supporting fact for the particular story in the visualization, even without any supervision.
# 6.5 Learning Curves for the Recurrent Controller
In Figure 3, we compare the learning curves of the continuous- and discrete-attention D-NTM models with the recurrent controller on Task 1. Surprisingly, the discrete-attention D-NTM converges faster than the continuous-attention model. The main difficulty of learning continuous attention is that learning to write with continuous attention can be challenging.
[Figure 2 story (read/write head markers omitted): "Antoine is bored. Jason is hungry. Jason travelled to the kitchen. Antoine travelled to the garden. Jason got the apple there. Yann is tired. Yann journeyed to the bedroom. Why did Yann go to the bedroom?"]
Figure 2: An example view of the discrete attention over the memory slots for both the read (left) and write (right) heads. The x-axis denotes the memory locations being accessed and the y-axis corresponds to the content in the particular memory location. In this figure, we visualize the discrete-attention model with 3 reading steps on task 20. It is easy to see that the NTM with discrete attention accesses the relevant part of the memory. We only visualize the last of the three steps for writing, because with discrete attention the model usually just reads the empty slots of the memory.
[Figure 3 legend: training NLL of the hard-attention model vs. the soft-attention model.]
Figure 3: A visualization of the learning curves of continuous and discrete D-NTM models trained on Task 1 using 3 steps. In most tasks, we observe that the discrete-attention model with the GRU controller converges faster than the continuous-attention model.
# 6.6 Training with Continuous Attention and Testing with Discrete Attention
In Table 3, we provide results that investigate the effect of using discrete attention at test time for a model trained with the feedforward controller and continuous attention. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we have introduced in Section 4.2. The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time. We observe that the Discrete† D-NTM model, which is trained with continuous attention, outperforms the Discrete D-NTM model.
continuous Discrete Discrete* Discrete† D-NTM D-NTM D-NTM D-NTM 14.79 4.38 76.67 27.5 70.83 71.25 44.06 0.00 17.71 1.67 48.13 1.46 23.54 6.04 35.62 1.70 14.38 0.63 56.25 19.80 39.58 0.00 32.08 6.25 18.54 7.5 24.79 17.5 39.73 0.0 71.15 49.65 43.75 1.25 2.92 0.24 71.56 39.47 9.79 0.0 12.81 37.79
Table 3: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward controller. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we have introduced in Section 3.2. The Discrete† D-NTM model is the continuous-attention model which uses discrete attention at test time.
# 6.7 D-NTM with BoW Fact Representation
In Table 4, we provide results for the D-NTM using BoW with positional encoding (PE) (Sukhbaatar et al., 2015) as the representation of the input facts. The fact representations are provided as input to the GRU controller. In agreement with our results using the GRU fact representation, with the BoW fact representation we observe improvements with multi-step addressing over single-step addressing and with discrete addressing over continuous addressing.
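For reference, a minimal sketch of a BoW-with-positional-encoding fact representation is given below. It follows our reading of the position-encoding weights l[j, k] = (1 - j/J) - (k/d)(1 - 2j/J) from Sukhbaatar et al. (2015); the random word embeddings are stand-ins, not the paper's preprocessing.

```python
import numpy as np

def positional_encoding(n_words, dim):
    """Position weights l[j, k] = (1 - j/J) - (k/d) * (1 - 2j/J),
    with 1-based word index j and embedding index k."""
    J, d = n_words, dim
    j = np.arange(1, J + 1)[:, None]      # word positions 1..J
    k = np.arange(1, d + 1)[None, :]      # embedding dimensions 1..d
    return (1.0 - j / J) - (k / d) * (1.0 - 2.0 * j / J)

def encode_fact(word_embeddings):
    """BoW-with-PE fact representation: position-weighted sum of embeddings."""
    J, d = word_embeddings.shape
    return (positional_encoding(J, d) * word_embeddings).sum(axis=0)

rng = np.random.default_rng(0)
fact = rng.normal(size=(6, 50))           # 6 words, 50-dim embeddings (stand-ins)
print(encode_fact(fact).shape)            # (50,) vector fed to the GRU controller
```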
Task D-NTM(1-step) D-NTM(1-step) D-NTM(3-steps) D-NTM(3-steps) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Avg
Table 4: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU controller, where the representations of facts are obtained with BoW using positional encoding.
# 7 Experiments on Sequential pMNIST
In the sequential MNIST task, the pixels of the MNIST digits are provided to the model in scan-line order, left to right and top to bottom (Le et al., 2015). At the end of the sequence of pixels, the model predicts the label of the digit. We experiment with the D-NTM on the variation of sequential MNIST where the order of the pixels is randomly shuffled; we call this task permuted MNIST (pMNIST). The particular value of this task for our paper is that it measures the model's ability to deal with long-term dependencies. We report our results in Table 5 and observe improvements over the other models that we compare against. In Table 5, "discrete addressing with MAB" refers to the D-NTM model using REINFORCE with a baseline computed from moving averages of the reward, and "discrete addressing with IB" refers to the D-NTM using REINFORCE with an input-based baseline.
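To make the two baselines concrete, the sketch below shows the REINFORCE weight (reward minus baseline) that would scale the log-likelihood gradient of a sampled address. The decay rate and the linear form of the input-based predictor are our own illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

class MovingAverageBaseline:
    """MAB: a single scalar tracking an exponential moving average of rewards."""
    def __init__(self, decay=0.9):
        self.decay, self.value = decay, 0.0
    def __call__(self, _inputs):
        return self.value
    def update(self, reward):
        self.value = self.decay * self.value + (1 - self.decay) * reward

class InputBaseline:
    """IB: a (here linear) predictor of the reward from the controller input."""
    def __init__(self, dim, lr=0.01):
        self.w, self.lr = np.zeros(dim), lr
    def __call__(self, inputs):
        return float(self.w @ inputs)
    def update(self, inputs, reward):
        self.w += self.lr * (reward - self(inputs)) * inputs

def reinforce_weight(reward, baseline, inputs):
    # A better baseline lowers the variance of the policy-gradient estimate.
    return reward - baseline(inputs)

mab, ib = MovingAverageBaseline(), InputBaseline(dim=4)
x, r = rng.normal(size=4), 1.0
print(reinforce_weight(r, mab, x), reinforce_weight(r, ib, x))
mab.update(r); ib.update(x, r)
```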
In Figure 4, we show the learning curves of the input-based baseline (ibb) and regular REINFORCE with a moving-averages baseline (mab) on the pMNIST task. We observe that the input-based baseline is in general much easier to optimize and converges faster as well, but it can also quickly overfit to the task. Let us note that recurrent batch normalization with an LSTM (Cooijmans et al., 2017) reaches 95.6% accuracy and performs much better than the other algorithms. However, it is possible to use recurrent batch normalization in our model as well and potentially improve our results on this task.
In all our experiments on the sequential MNIST task, we try to keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller and each
| Model | Test accuracy (%) |
|---|---|
| D-NTM discrete MAB | 89.6 |
| D-NTM discrete IB | 92.3 |
| Soft D-NTM | 93.4 |
| NTM | 90.9 |
| I-RNN (Le et al., 2015) | 82.0 |
| Zoneout (Krueger et al., 2016) | 93.1 |
| LSTM (Krueger et al., 2016) | 89.8 |
| Unitary-RNN (Arjovsky et al., 2016) | 91.4 |
| Recurrent Dropout (Krueger et al., 2016) | 92.5 |
| Recurrent Batch Normalization (Cooijmans et al., 2017) | 95.6 |
Table 5: Sequential pMNIST.
[Figure 4 legend: validation and training learning curves for ibb and mab.]
Figure 4: We compare the learning curves of our D-NTM model using discrete attention on the pMNIST task with the input-based baseline and the regular REINFORCE baseline. The x-axis is the loss and the y-axis is the number of epochs.
content vector of size 8 and address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models.
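Collected in one place, the hyperparameters stated above for the sequential pMNIST runs look roughly like the following; the key names are our own, not the authors' code.

```python
# Hyperparameters for the sequential pMNIST experiments, as stated in the text.
# Key names are illustrative and do not come from the authors' code base.
pmnist_config = {
    "controller": "GRU",
    "controller_units": 100,
    "content_vector_size": 8,
    "address_vector_size": 8,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "read_write_consistency_regularization": False,
}
print(pmnist_config)
```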
# 8 Stanford Natural Language Inference (SNLI) Task
The SNLI task (Bowman et al., 2015) is designed to test the abilities of different machine learning algorithms to infer the entailment between two statements. The two statements can either entail, contradict, or be neutral to each other. In this paper, we feed the premise, followed by an end-of-premise (EOP) token, and then the hypothesis as a single input sequence to the model. Rocktäschel et al. (2015) have trained their model by providing the premise and the hypothesis in a similar way. This ensures that the performance of our model does not rely on a particular preprocessing or architectural engineering; rather, we mainly rely on the model's ability to represent the sequence and its dependencies efficiently. The model proposed by Rocktäschel et al. (2015) applies attention over its previous hidden states over the premise when it reads the hypothesis.
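A minimal sketch of how such an input sequence can be assembled is shown below; the token name and whitespace tokenization are placeholders for illustration, not the paper's preprocessing.

```python
def build_snli_sequence(premise, hypothesis, eop_token="<EOP>"):
    """Concatenate premise, an end-of-premise marker, and hypothesis into the
    single token sequence fed to the controller.  Whitespace tokenization is a
    stand-in for the real preprocessing."""
    return premise.split() + [eop_token] + hypothesis.split()

seq = build_snli_sequence(
    "A man inspects the uniform of a figure in some East Asian country",
    "The man is sleeping",
)
print(seq[:8], "...", seq[-3:])
```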
In Table 6, we report results for different models with and without recurrent dropout (Semeniuta et al., 2016) and layer normalization (Ba et al., 2016).
The input vocabulary we use in our paper has 41,200 words, and we use GloVe (Pennington et al., 2014) embeddings to initialize the input embeddings. We use a GRU controller with 300 units, and the size of the embeddings is also 300. We optimize our models with Adam. We have done a hyperparameter search to find the optimal learning rate via random search, sampling the learning rate from log-space between 1e-2 and 1e-4 for each model. We use layer normalization in our controller (Ba et al., 2016).
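Sampling a learning rate log-uniformly between 1e-4 and 1e-2, as described above, can be done as in this sketch (the number of trials is arbitrary):

```python
import numpy as np

def sample_learning_rates(n_trials, low=1e-4, high=1e-2, seed=0):
    """Draw learning rates log-uniformly between `low` and `high`,
    i.e. uniformly in exponent space, as used for the random search."""
    rng = np.random.default_rng(seed)
    exponents = rng.uniform(np.log10(low), np.log10(high), size=n_trials)
    return 10.0 ** exponents

print(sample_learning_rates(5))
```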
We have observed significant improvements from using layer normalization and dropout on this task, mainly because overfitting is a severe problem on SNLI. The D-NTM achieves better performance than both the LSTM and the NTM.
| Model | Test Acc |
|---|---|
| Word by Word Attention (Rocktäschel et al., 2015) | 83.5 |
| Word by Word Attention two-way (Rocktäschel et al., 2015) | 83.2 |
| LSTM + LayerNorm + Dropout | 81.7 |
| NTM + LayerNorm + Dropout | 81.8 |
| DNTM + LayerNorm + Dropout | 82.3 |
| LSTM (Bowman et al., 2015) | 77.6 |
| D-NTM | 80.9 |
| NTM | 80.2 |
Table 6: Stanford Natural Language Inference Task
# 9 NTM Toy Tasks
We explore the possibility of using the D-NTM to solve algorithmic tasks such as the copy and associative recall tasks. We train our model on the same sequence lengths as in the experiments of Graves et al. (2014). We report our results in Table 7. We find that the D-NTM using continuous attention can successfully learn the "Copy" and "Associative Recall" tasks.
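For concreteness, a common way to generate copy-task data in the style of Graves et al. (2014) is sketched below; the bit width, delimiter convention, and sequence length here are assumptions chosen for illustration, not values taken from the paper.

```python
import numpy as np

def make_copy_example(seq_len, n_bits=8, rng=None):
    """One copy-task example: random binary vectors, a delimiter channel,
    then blank steps during which the model must reproduce the input."""
    if rng is None:
        rng = np.random.default_rng()
    pattern = rng.integers(0, 2, size=(seq_len, n_bits)).astype(float)
    inputs = np.zeros((2 * seq_len + 1, n_bits + 1))
    inputs[:seq_len, :n_bits] = pattern
    inputs[seq_len, n_bits] = 1.0              # delimiter flag
    targets = np.zeros((2 * seq_len + 1, n_bits))
    targets[seq_len + 1:] = pattern            # copy expected after delimiter
    return inputs, targets

x, y = make_copy_example(seq_len=10)
print(x.shape, y.shape)
```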
In Table 7, we train our model on sequences of the same length as in the experiments of Graves et al. (2014) and test it on sequences of the maximum length seen during training. We consider a model successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 over the sequences of maximum length seen during training. We set the threshold to 0.02 because, empirically, we observe that models with higher validation costs generalize badly over longer sequences. The "D-NTM discrete" model in this table is trained with REINFORCE using moving averages to estimate the baseline.
| Model | Copy | Associative Recall |
|---|---|---|
| Soft D-NTM | Success | Success |
| D-NTM discrete | Success | Failure |
| NTM | Success | Success |
Table 7: NTM Toy Tasks.
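The success criterion described above amounts to a simple threshold check on the validation binary cross-entropy at the maximum training length; a small sketch (with made-up targets and predictions) is given below.

```python
import numpy as np

def binary_cross_entropy(targets, predictions, eps=1e-7):
    p = np.clip(predictions, eps, 1 - eps)
    return float(-(targets * np.log(p) + (1 - targets) * np.log(1 - p)).mean())

def is_successful(targets, predictions, threshold=0.02):
    """A run counts as a success if validation BCE on sequences of the
    maximum training length stays below the 0.02 threshold."""
    return binary_cross_entropy(targets, predictions) < threshold

t = np.array([[1.0, 0.0, 1.0]])
print(is_successful(t, np.array([[0.99, 0.01, 0.98]])))  # True
print(is_successful(t, np.array([[0.60, 0.40, 0.55]])))  # False
```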
On both the copy and associative recall tasks, we try to keep the capacity of our model close to that of our baselines. We use 100 GRU units in the controller, each content vector has a size of 8, and the address vectors have a size of 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models. For the model with discrete attention, we use REINFORCE with a baseline computed using moving averages.
# 10 Conclusion and Future Work
In this paper we extend neural Turing machines (NTM) by introducing a learnable addressing scheme which allows the NTM to perform highly nonlinear location-based addressing. This extension, which we refer to as the dynamic NTM (D-NTM), is extensively tested with various configurations, including different addressing mechanisms (continuous vs. discrete) and different numbers of addressing steps, on the Facebook bAbI tasks. This is the first time an NTM-type model has been tested on this task, and we observe that the NTM, especially the proposed D-NTM, performs better than a vanilla LSTM-RNN. Furthermore, the experiments revealed that discrete addressing works better than continuous addressing with the GRU controller, and our analysis reveals that this is the case when the task requires precise retrieval of memory content.
Our experiments show that the NTM-based models can be weaker than other variants of memory networks which do not learn but have an explicit mechanism of storing
incoming facts as they are. We conjecture that this is due to the difficulty of learning how to write, manipulate, and delete the content of memory. Despite this difficulty, we find the NTM-based approach, such as the proposed D-NTM, to be a better, future-proof approach, because it can scale to a much longer horizon (where it becomes impossible to explicitly store all the experiences).
On the pMNIST task, we show that our model can outperform other similar types of approaches proposed to deal with long-term dependencies. On the copy and associative recall tasks, we show that our model can solve the algorithmic problems that NTM-type models are designed to solve.
Finally, we have shown some results on the SNLI task, where our model performed better than the NTM and the LSTM. However, our results do not involve any task-specific modifications, and they can be improved further by structuring the architecture of our model according to the SNLI task.
The success of both the learnable address and the discrete addressing scheme suggests two future research directions. First, we should try both of these schemes in a wider array of memory-based models, as they are not specific to neural Turing machines. Second, the proposed D-NTM needs to be evaluated on a diverse set of applications, such as text summarization (Rush et al., 2015), visual question answering (Antol et al., 2015), and machine translation, in order to reach a more concrete conclusion.
# References
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: visual question answering. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2425–2433, 2015.
Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. ICML 2016, 2016.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. arXiv preprint arXiv:1506.07503, 2015.
Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. ICLR 2017, Toulon, France, 2017.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819–1827, 2015.
Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. ICML 2016, New York, 2016.
Karl Moritz Hermann, Tom´aËs KoËcisk`y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. arXiv preprint arXiv:1506.03340, 2015. | 1607.00036#53 | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | We extend neural Turing machine (NTM) model into a dynamic neural Turing
machine (D-NTM) by introducing a trainable memory addressing scheme. This
addressing scheme maintains for each memory cell two separate vectors, content
and address vectors. This allows the D-NTM to learn a wide variety of
location-based addressing strategies including both linear and nonlinear ones.
We implement the D-NTM with both continuous, differentiable and discrete,
non-differentiable read/write mechanisms. We investigate the mechanisms and
effects of learning to read and write into a memory through experiments on
Facebook bAbI tasks using both a feedforward and GRUcontroller. The D-NTM is
evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM
baselines. We have done extensive analysis of our model and different
variations of NTM on bAbI task. We also provide further experimental results on
sequential pMNIST, Stanford Natural Language Inference, associative recall and
copy tasks. | http://arxiv.org/pdf/1607.00036 | Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio | cs.LG, cs.NE | 13 pages, 3 figures | null | cs.LG | 20160630 | 20170317 | [
{
"id": "1511.02301"
},
{
"id": "1603.05118"
},
{
"id": "1506.07503"
},
{
"id": "1506.02075"
},
{
"id": "1509.06664"
},
{
"id": "1504.00941"
},
{
"id": "1606.01305"
},
{
"id": "1502.05698"
},
{
"id": "1510.03931"
},
{
"id": "1602.08671"
},
{
"id": "1506.03340"
},
{
"id": "1503.08895"
},
{
"id": "1508.05326"
},
{
"id": "1607.06450"
},
{
"id": "1605.07427"
},
{
"id": "1511.07275"
},
{
"id": "1506.05869"
}
] |
1607.00036 | 54 | Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.
Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Universität München, page 91, 1991.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Peter J. Huber. Robust estimation of a location parameter. Ann. Math. Statist., 35(1):73–101, 03 1964.
Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
1607.00036 | 55 | Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the Conference on Empirical Methods for Natural Language Processing (EMNLP 2015), 2015.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016. URL http://arxiv.org/abs/1606.03126.
1607.00036 | 56 | Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. International Conference on Machine Learning, ICML, 2014.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532–1543, 2014.
Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in NIPS. 2016.
Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR 2016, 2016.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015.
1607.00036 | 57 | Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 379–389, 2015.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. ICML 2016, 2016.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.
1607.00036 | 58 | Guo-Zheng Sun, C. Lee Giles, and Hsing-Hen Chen. The neural network pushdown automaton: Architecture, dynamics and training. In Adaptive Processing of Sequences and Data Structures, International Summer School on Neural Networks, pages 296–345, 1997.
Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015a.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015b. In Press.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.
1607.00036 | 59 | Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015.
Greg Yang. Lie access neural turing machine. arXiv preprint arXiv:1602.08671, 2016.
Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal structure. In Computer Vision (ICCV), 2015 IEEE International Conference on. IEEE, 2015.
Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, abs/1505.00521, 2015.
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
1606.09274 | 1 | # Abstract
Neural Machine Translation (NMT), like many other deep learning domains, typically suffers from over-parameterization, resulting in large storage sizes. This paper examines three simple magnitude-based pruning schemes to compress NMT models, namely class-blind, class-uniform, and class-distribution, which differ in terms of how pruning thresholds are computed for the different classes of weights in the NMT architecture. We demonstrate the efficacy of weight pruning as a compression technique for a state-of-the-art NMT system. We show that an NMT model with over 200 million parameters can be pruned by 40% with very little performance loss as measured on the WMT'14 English-German translation task. This sheds light on the distribution of redundancy in the NMT architecture. Our main result is that with retraining, we can recover and even surpass the original performance with an 80%-pruned model.
# Introduction | 1606.09274#1 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 2 | # Introduction
Neural Machine Translation (NMT) is a simple new architecture for translating texts from one language into another (Sutskever et al., 2014; Cho et al., 2014). NMT is a single deep neural network that is trained end-to-end, holding several advantages such as the ability to capture long-range dependencies in sentences, and generalization to unseen texts. Despite being relatively new, NMT has already achieved state-of-the-art translation results for several language pairs including English-French (Luong et al., 2015b), English-German (Jean et al., 2015a; Luong et al., 2015a; Luong and Manning, 2015; Sennrich et al., 2016), English-Turkish (Sennrich et al., 2016), and English-Czech (Jean et al., 2015b; Luong and Manning, 2016). Figure 1 gives an example of an NMT system.
*Both authors contributed equally.
[Figure 1 diagram: the source input "I am a student" is translated into the target output "Je suis étudiant".]
Figure 1: A simplified diagram of NMT.
1606.09274 | 3 | While NMT has a significantly smaller memory footprint than traditional phrase-based approaches (which need to store gigantic phrase-tables and language models), the model size of NMT is still prohibitively large for mobile devices. For example, a recent state-of-the-art NMT system requires over 200 million parameters, resulting in a storage size of hundreds of megabytes (Luong et al., 2015a). Though the trend for bigger and deeper neural networks has brought great progress, it has also introduced over-parameterization, resulting in long running times, overfitting, and the storage size issue discussed above. A solution to the over-parameterization problem could potentially aid all three issues, though the first (long running times) is outside the scope of this paper.
In this paper we investigate the efficacy of weight pruning for NMT as a means of compression. We show that despite
1606.09274 | 4 | In this paper we investigate the efficacy of weight pruning for NMT as a means of compression. We show that despite
its simplicity, magnitude-based pruning with retraining is highly effective, and we compare three magnitude-based pruning schemes: class-blind, class-uniform and class-distribution. Though recent work has chosen to use the latter two, we find the first and simplest scheme, class-blind, the most successful. We are able to prune 40% of the weights of a state-of-the-art NMT system with negligible performance loss, and by adding a retraining phase after pruning, we can prune 80% with no performance loss. Our pruning experiments also reveal some patterns in the distribution of redundancy in NMT. In particular we find that higher layers, attention and softmax weights are the most important, while lower layers and the embedding weights hold a lot of redundancy. For the Long Short-Term Memory (LSTM) architecture, we find that at lower layers the parameters for the input are most crucial, but at higher layers the parameters for the gates also become important.
# 2 Related Work
1606.09274 | 5 | Pruning the parameters from a neural network, referred to as weight pruning or network pruning, is a well-established idea though it can be implemented in many ways. Among the most popular are the Optimal Brain Damage (OBD) (Le Cun et al., 1989) and Optimal Brain Surgeon (OBS) (Hassibi and Stork, 1993) techniques, which involve computing the Hessian matrix of the loss function with respect to the parameters, in order to assess the saliency of each parameter. Parameters with low saliency are then pruned from the network and the remaining sparse network is retrained. Both OBD and OBS were shown to perform better than the so-called "naive magnitude-based approach", which prunes parameters according to their magnitude (deleting parameters close to zero). However, the high computational complexity of OBD and OBS compare unfavorably to the computational simplicity of the magnitude-based approach, especially for large networks (Augasta and Kathirvalavakumar, 2013). In recent years, the deep learning renaissance has prompted a re-investigation of network pruning for modern models and tasks.
1606.09274 | 7 | Han et al. (2015b) prune 89% of AlexNet parameters with no accuracy loss on the ImageNet task.
Other approaches focus on pruning neurons rather than parameters, via sparsity-inducing regularizers (Murray and Chiang, 2015) or "wiring together" pairs of neurons with similar input weights (Srinivas and Babu, 2015). These approaches are much more constrained than weight-pruning schemes; they necessitate finding entire zero rows of weight matrices, or near-identical pairs of rows, in order to prune a single neuron. By contrast weight-pruning approaches allow weights to be pruned freely and independently of each other. The neuron-pruning approach of Srinivas and Babu (2015) was shown to perform poorly (it suffered performance loss after removing only 35% of AlexNet parameters) compared to the weight-pruning approach of Han et al. (2015b). Though Murray and Chiang (2015) demonstrates neuron-pruning for language modeling as part of a (non-neural) Machine Translation pipeline, their approach is more geared towards architecture selection than compression.
1606.09274 | 8 | There are many other compression techniques for neural networks, including approaches based on low-rank approximations for weight matrices (Jaderberg et al., 2014; Denton et al., 2014), or weight sharing via hash functions (Chen et al., 2015). Several methods involve reducing the precision of the weights or activations (Courbariaux et al., 2015), sometimes in conjunction with specialized hardware (Gupta et al., 2015), or even using binary weights (Lin et al., 2016). The "knowledge distillation" technique of Hinton et al. (2015) involves training a small "student" network on the soft outputs of a large "teacher" network. Some approaches use a sophisticated pipeline of several techniques to achieve impressive feats of compression (Han et al., 2015a; Iandola et al., 2016).
1606.09274 | 9 | Most of the above work has focused on compressing CNNs for vision tasks. We extend the magnitude-based pruning approach of Han et al. (2015b) to recurrent neural networks (RNN), in particular LSTM architectures for NMT, and to our knowledge we are the first to do so. There has been some recent work on compression for RNNs (Lu et al., 2016; Prabhavalkar et al., 2016), but it focuses on other, non-pruning compression techniques. Nonetheless, our general observations on the distribution of redundancy in a LSTM, detailed in Section 4.5, are corroborated by Lu et al.
1606.09274 | 10 | [Figure 2 diagram: word embeddings and hidden layers have length n; source and target embedding weights are of size n × V, layer weights are 4n × 2n each, attention weights n × 2n, and softmax weights V × n.]
Figure 2: NMT architecture. This example has two layers, but our system has four. The different weight classes are indicated by arrows of different color (the black arrows in the top right represent simply choosing the highest-scoring word, and thus require no parameters). Best viewed in color.
(2016).
# 3 Our Approach
1606.09274 | 11 | (2016).
# 3 Our Approach
We ï¬rst give a brief overview of Neural Ma- chine Translation before describing the model ar- chitecture of interest, the deep multi-layer recur- rent model with LSTM. We then explain the dif- ferent types of NMT weights together with our ap- proaches to pruning and retraining.
# 3.1 Neural Machine Translation
Neural machine translation aims to directly model the conditional probability p(y|x) of translating a source sentence, x_1, . . . , x_n, to a target sentence, y_1, . . . , y_m. It accomplishes this goal through an encoder-decoder framework (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014). The encoder computes a representation s for each source sentence. Based on that source representation, the decoder generates a translation, one target word at a time, and hence, decomposes the log conditional probability as:
\log p(y|x) = \sum_{t=1}^{m} \log p(y_t \mid y_{<t}, s)
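To make the decomposition concrete, the following minimal Python sketch (not from the paper; the per-token probabilities are made up) sums per-token log-probabilities, where each entry stands in for p(y_t | y_<t, s):

```python
import math

def sentence_log_prob(token_probs):
    # token_probs[t] stands in for p(y_t | y_<t, s): the decoder's probability
    # of the t-th target word given the preceding target words and the source
    # representation s. The sentence log-probability is the sum of their logs.
    return sum(math.log(p) for p in token_probs)

# Hypothetical per-token probabilities for a 3-word translation.
print(sentence_log_prob([0.7, 0.8, 0.9]))
```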
Most NMT work uses RNNs, but approaches differ in terms of: (a) architecture, which can
1606.09274 | 12 | \log p(y|x) = \sum_{t=1}^{m} \log p(y_t \mid y_{<t}, s)
Most NMT work uses RNNs, but approaches differ in terms of: (a) architecture, which can
be unidirectional, bidirectional, or deep multi-layer RNN; and (b) RNN type, which can be Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) or the Gated Recurrent Unit (Cho et al., 2014).
In this work, we specifically consider the deep multi-layer recurrent architecture with LSTM as the hidden unit type. Figure 1 illustrates an instance of that architecture during training in which the source and target sentence pair are input for supervised learning. During testing, the target sentence is not known in advance; instead, the most probable target words predicted by the model are fed as inputs into the next timestep. The network stops when it emits the end-of-sentence symbol, a special "word" in the vocabulary, represented by a dash in Figure 1.
# 3.2 Understanding NMT Weights
1606.09274 | 13 | # 3.2 Understanding NMT Weights
Figure 2 shows the same system in more detail, highlighting the different types of parameters, or weights, in the model. We will go through the architecture from bottom to top. First, a vocabulary is chosen for each language, assuming that the top V frequent words are selected. Thus, every word in the source or target vocabulary can be represented by a one-hot vector of length V.
The source input sentence and target input sentence, represented as a sequence of one-hot vectors, are transformed into a sequence of word embeddings by the embedding weights. These embedding weights, which are learned during training, are different for the source words and the target words. The word embeddings and all hidden layers are vectors of length n (a chosen hyperparameter).
1606.09274 | 14 | The word embeddings are then fed as input into the main network, which consists of two multi-layer RNNs "stuck together": an encoder for the source language and a decoder for the target language, each with their own weights. The feed-forward (vertical) weights connect the hidden unit from the layer below to the upper RNN block, and the recurrent (horizontal) weights connect the hidden unit from the previous time-step RNN block to the current time-step RNN block.
The hidden state at the top layer of the decoder is fed through an attention layer, which guides the translation by "paying attention" to relevant parts of the source sentence; for more information see Bahdanau et al. (2015) or Section 3 of Luong et al. (2015a). Finally, for each target word, the top layer hidden unit is transformed by the softmax weights into a score vector of length V. The target word with the highest score is selected as the output translation.
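To give a sense of how the parameters are distributed across these weight classes, here is a small illustrative tally for a 4-layer system; V = 50,000 and n = 1,000 are assumed, round-number values (not taken from the paper), and the class list and shapes follow Figure 2:

```python
# Hypothetical sizes: V = vocabulary size, n = hidden/embedding dimension.
V, n, layers = 50_000, 1_000, 4

weight_classes = {
    "source embedding": n * V,
    "target embedding": n * V,
    "source layers": layers * (4 * n) * (2 * n),  # one 4n x 2n matrix per layer
    "target layers": layers * (4 * n) * (2 * n),
    "attention": n * (2 * n),
    "softmax": V * n,
}

for name, count in weight_classes.items():
    print(f"{name:>16}: {count / 1e6:6.1f}M parameters")
print(f"{'total':>16}: {sum(weight_classes.values()) / 1e6:6.1f}M parameters")
```

With these assumed sizes the total lands in the same ballpark as the 200M+ parameters mentioned earlier, and the embedding and softmax classes account for a large share of it.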
1606.09274 | 15 | Weight Subgroups in LSTM. For the aforementioned RNN block, we choose to use LSTM as the hidden unit type. To facilitate our later discussion on the different subgroups of weights within LSTM, we first review the details of LSTM as formulated by Zaremba et al. (2014) as follows:
\begin{pmatrix} i \\ f \\ o \\ \hat{h} \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} T_{4n,2n} \begin{pmatrix} h_t^{l-1} \\ h_{t-1}^{l} \end{pmatrix} \quad (2)
c_t^l = f \circ c_{t-1}^l + i \circ \hat{h} \quad (3)
h_t^l = o \circ \tanh(c_t^l) \quad (4)
Here, each LSTM block at time t and layer l computes as output a pair of hidden and memory vectors (h_t^l, c_t^l) given the previous pair (h_{t-1}^l, c_{t-1}^l) and an input vector h_t^{l-1} (either from the LSTM block below or the embedding weights if l = 1). All of these vectors have length n.
The core of a LSTM block is the weight matrix T_{4n,2n} of size 4n × 2n. This matrix can be decomposed into 8 subgroups that are responsible for the interactions between {input gate i, forget gate f, output gate o, input signal \hat{h}} × {feed-forward input h_t^{l-1}, recurrent input h_{t-1}^l}.
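As a concrete sketch of this formulation (a NumPy toy, not the authors' code), the step below concatenates the feed-forward and recurrent inputs into a 2n-vector, applies T_{4n,2n}, and splits the result into the four length-n blocks; the 8 subgroups are the four row blocks crossed with the two column halves of T:

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_below, h_prev, c_prev, T):
    # h_below = h_t^{l-1} (feed-forward input), h_prev = h_{t-1}^l (recurrent
    # input), c_prev = c_{t-1}^l. T has shape (4n, 2n); its left n columns act
    # on h_below and its right n columns on h_prev, giving the 8 subgroups.
    z = T @ np.concatenate([h_below, h_prev])                    # length 4n
    zi, zf, zo, zh = np.split(z, 4)                              # length-n blocks
    i, f, o, h_hat = sigm(zi), sigm(zf), sigm(zo), np.tanh(zh)   # eq. (2)
    c = f * c_prev + i * h_hat                                   # eq. (3)
    h = o * np.tanh(c)                                           # eq. (4)
    return h, c

n = 4
T = 0.1 * np.random.randn(4 * n, 2 * n)
h, c = lstm_step(np.zeros(n), np.zeros(n), np.zeros(n), T)
```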
# 3.3 Pruning Schemes
1606.09274 | 16 | # 3.3 Pruning Schemes
We follow the general magnitude-based approach of Han et al. (2015b), which consists of pruning weights with smallest absolute value. However, we question the authors' pruning scheme with respect to the different weight classes, and experiment with three pruning schemes. Suppose we wish to prune x% of the total parameters in the model. How do we distribute the pruning over the different weight classes (illustrated in Figure 2) of our model? We propose to examine three different pruning schemes, each sketched in code after the list below:
1. Class-blind: Take all parameters, sort them by magnitude and prune the x% with smallest magnitude, regardless of weight class. (So some classes are pruned proportionally more than others.)
2. Class-uniform: Within each class, sort the weights by magnitude and prune the x% with smallest magnitude. (So all classes have exactly x% of their parameters pruned.)
3. Class-distribution: For each class c, weights with magnitude less than λσc are pruned. Here, σc is the standard deviation of that class and λ is a universal parameter chosen such that in total, x% of all parameters are pruned. This is used by Han et al. (2015b).
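The three schemes differ only in how the magnitude threshold is chosen. The NumPy sketch below is a rough way to realize them (it is not the authors' implementation; the class names are made up and λ is found by a simple bisection):

```python
import numpy as np

def pruning_masks(classes, x_percent, scheme):
    # `classes` maps a weight-class name to its weight array; the returned
    # masks are True where a weight is kept, False where it is pruned.
    if scheme == "class-blind":
        # One global threshold over all weights, regardless of class.
        all_w = np.concatenate([np.abs(w).ravel() for w in classes.values()])
        thr = np.percentile(all_w, x_percent)
        return {k: np.abs(w) >= thr for k, w in classes.items()}

    if scheme == "class-uniform":
        # Prune x% within each class separately.
        return {k: np.abs(w) >= np.percentile(np.abs(w), x_percent)
                for k, w in classes.items()}

    if scheme == "class-distribution":
        # Threshold each class at lambda * sigma_c; bisection on lambda hits
        # the requested overall pruning percentage.
        sigmas = {k: w.std() for k, w in classes.items()}
        total = sum(w.size for w in classes.values())
        lo, hi = 0.0, 10.0
        for _ in range(50):
            lam = (lo + hi) / 2
            pruned = sum((np.abs(w) < lam * sigmas[k]).sum()
                         for k, w in classes.items())
            lo, hi = (lam, hi) if pruned < total * x_percent / 100 else (lo, lam)
        return {k: np.abs(w) >= lam * sigmas[k] for k, w in classes.items()}

    raise ValueError(scheme)

classes = {"embed": np.random.randn(100, 50), "softmax": np.random.randn(50, 100)}
masks = pruning_masks(classes, x_percent=40, scheme="class-blind")
```

Applying any returned mask simply means zeroing the weights wherever the mask is False.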
1606.09274 | 17 | All these schemes have their seeming advantages. Class-blind pruning is the simplest and adheres to the principle that pruning weights (or equivalently, setting them to zero) is least damaging when those weights are small, regardless of their locations in the architecture. Class-uniform pruning and class-distribution pruning both seek to prune proportionally within each weight class, either absolutely, or relative to the standard deviation of that class. We find that class-blind pruning outperforms both other schemes (see Section 4.1).
# 3.4 Retraining
In order to prune NMT models aggressively without performance loss, we retrain our pruned networks. That is, we continue to train the remaining weights, but maintain the sparse structure introduced by pruning. In our implementation, pruned
[Figure 3 plot: BLEU score vs. percentage pruned (0–90%) for class-blind, class-uniform, and class-distribution pruning.]
Figure 3: Effects of different pruning schemes.
1606.09274 | 18 | Figure 3: Effects of different pruning schemes.
weights are represented by zeros in the weight matrices, and we use binary "mask" matrices, which represent the sparse structure of a network, to ignore updates to weights at pruned locations. This implementation has the advantage of simplicity as it requires minimal changes to the training and deployment code, but we note that a more complex implementation utilizing sparse matrices and sparse matrix multiplication could potentially yield speed improvements. However, such an implementation is beyond the scope of this paper.
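A minimal sketch of that masked-update idea (assuming plain SGD and dense NumPy arrays; not the paper's actual code):

```python
import numpy as np

def masked_sgd_step(weights, grads, masks, lr=0.5):
    # Apply the gradient step only where the binary mask is 1, and keep the
    # pruned positions pinned at exactly zero so the sparse structure is kept.
    for name in weights:
        weights[name] -= lr * grads[name] * masks[name]
        weights[name] *= masks[name]

W = {"layer1": np.random.randn(8, 4)}
M = {"layer1": (np.abs(W["layer1"]) >=
                np.percentile(np.abs(W["layer1"]), 80)).astype(float)}
W["layer1"] *= M["layer1"]                      # prune 80% by magnitude
G = {"layer1": np.random.randn(8, 4)}           # stand-in gradient
masked_sgd_step(W, G, M)
```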
# 4 Experiments
1606.09274 | 19 | # 4 Experiments
We evaluate the effectiveness of our pruning approaches on a state-of-the-art NMT model.1 Speciï¬cally, an attention-based English-German NMT system from Luong et al. (2015a) is consid- ered. Training data was obtained from WMTâ14 consisting of 4.5M sentence pairs (116M English words, 110M German words). For more details on training hyperparameters, we refer readers to Section 4.1 of Luong et al. (2015a). All models are tested on newstest2014 (2737 sentences). The model achieves a perplexity of 6.1 and a BLEU score of 20.5 (after unknown word replacement).2 When retraining pruned NMT systems, we use the following settings: (a) we start with a smaller learning rate of 0.5 (the original model uses a learning rate of 1.0), (b) we train for fewer epochs, 4 instead of 12, using plain SGD, (c) a simple learning rate schedule is employed; after 2 epochs, we begin to halve the learning rate every half an epoch, and (d) all other hyperparameters are the | 1606.09274#19 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
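A small sketch of the retraining learning-rate schedule described in the preceding chunk (start at 0.5, train 4 epochs with plain SGD, halve every half epoch after epoch 2). This is our reading of the stated settings, not released code:

```python
def retraining_lr(half_epoch, base_lr=0.5):
    """Learning rate used during each half-epoch of the 4-epoch retraining run."""
    # Half-epochs 0..3 cover the first 2 epochs at the base rate;
    # from epoch 2 onward the rate is halved every half-epoch.
    if half_epoch < 4:
        return base_lr
    return base_lr * 0.5 ** (half_epoch - 3)

print([retraining_lr(h) for h in range(8)])
# [0.5, 0.5, 0.5, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```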
1606.09274 | 21 | # 4.1 Comparing pruning schemes
Despite its simplicity, we observe in Figure 3 that class-blind pruning outperforms both other schemes in terms of translation quality at all pruning percentages. In order to understand this result, for each of the three pruning schemes, we pruned each class separately and recorded the effect on performance (as measured by perplexity). Figure 4 shows that with class-uniform pruning, the overall performance loss is caused disproportionately by a few classes: target layer 4, attention and softmax weights. Looking at Figure 5, we see that the most damaging classes to prune also tend to be those with weights of greater magnitude – these classes have much larger weights than others at the same percentile, so pruning them under the class-uniform pruning scheme is more damaging. The situation is similar for class-distribution pruning. By contrast, Figure 4 shows that under class-blind pruning, the damage caused by pruning softmax, attention and target layer 4 weights is greatly decreased, and the contribution of each class towards the performance loss is overall more uniform. In fact, the distribution begins to reflect the number of parameters in each class – for example, the source and target embedding classes have larger contributions because they have more weights. We use only class-blind pruning for the rest of the experiments. (The three threshold rules are sketched in code below.) | 1606.09274#21 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
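The following sketch contrasts how the three schemes set their per-class magnitude thresholds, as we understand them (class-blind: one global threshold; class-uniform: the same fraction within each class; class-distribution: a per-class threshold proportional to the class standard deviation). Function and variable names are ours, not the authors' code:

```python
import numpy as np

def class_thresholds(classes, frac, scheme):
    """Per-class magnitude thresholds; `classes` maps class name -> weight array."""
    mags = {name: np.abs(w).ravel() for name, w in classes.items()}
    if scheme == "class-blind":
        # One global threshold over all weights pooled together.
        t = np.quantile(np.concatenate(list(mags.values())), frac)
        return {name: t for name in classes}
    if scheme == "class-uniform":
        # Prune the same fraction within every class.
        return {name: np.quantile(m, frac) for name, m in mags.items()}
    if scheme == "class-distribution":
        # Threshold lam * sigma_c per class, with lam tuned (bisection here)
        # so that the overall pruned fraction matches frac.
        sigmas = {name: m.std() for name, m in mags.items()}
        total = sum(m.size for m in mags.values())
        lo, hi = 0.0, 10.0
        for _ in range(50):
            lam = 0.5 * (lo + hi)
            pruned = sum((m < lam * sigmas[n]).sum() for n, m in mags.items())
            lo, hi = (lam, hi) if pruned < frac * total else (lo, lam)
        return {name: lam * sigmas[name] for name in classes}
    raise ValueError(scheme)
```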
1606.09274 | 22 | Figure 4 also reveals some interesting information about the distribution of redundancy in NMT architectures – namely it seems that higher layers are more important than lower layers, and that attention and softmax weights are crucial. We will explore the distribution of redundancy further in Section 4.5.
# 4.2 Pruning and retraining
Pruning has an immediate negative impact on performance (as measured by BLEU) that is exponential in pruning percentage; this is demonstrated by the blue line in Figure 6. However we find that up to about 40% pruning, performance is mostly unaffected, indicating a large amount of redundancy and over-parameterization in NMT.
We now consider the effect of retraining pruned models. The orange line in Figure 6 shows that after retraining the pruned models, baseline performance (20.48 BLEU) is both recovered and im
[Figure 4 bar chart: perplexity change (vertical axis) per weight class (source/target layers 1-4, attention, softmax, source/target embeddings) under the class-blind, class-uniform, and class-distribution schemes.] | 1606.09274#22 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 23 | [Figure 4 vertical axis label: perplexity change.]
Figure 4: "Breakdown" of performance loss (i.e., perplexity increase) by weight class, when pruning 90% of weights using each of the three pruning schemes. Each of the first eight classes has 8 million weights, attention has 2 million, and the last three have 50 million weights each.
[Figure 5 axes: magnitude of largest deleted weight (0 to 0.5) vs. perplexity change (log scale). Figure 6 axes: percentage pruned (10-90%) vs. BLEU score for the pruned, pruned-and-retrained, and sparse-from-the-beginning models.]
Figure 5: Magnitude of largest deleted weight vs. perplexity change, for the 12 different weight classes when pruning 90% of parameters by class-uniform pruning.
Figure 6: Performance of pruned models (a) after pruning, (b) after pruning and retraining, and (c) when trained with sparsity structure from the outset (see Section 4.3). | 1606.09274#23 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 24 | proved upon, up to 80% pruning (20.91 BLEU), with only a small performance loss at 90% pruning (20.13 BLEU). This may seem surprising, as we might not expect a sparse model to significantly out-perform a model with five times as many parameters. There are several possible explanations, two of which are given below.
Firstly, we found that the less-pruned models perform better on the training set than the validation set, whereas the more-pruned models have closer performance on the two sets. This indicates that pruning has a regularizing effect on the retraining phase, though clearly more is not always better, as the 50% pruned and retrained model has better validation set performance than the 90%
pruned and retrained model. Nonetheless, this reg- ularization effect may explain why the pruned and retrained models outperform the baseline. | 1606.09274#24 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 25 | pruned and retrained model. Nonetheless, this reg- ularization effect may explain why the pruned and retrained models outperform the baseline.
Alternatively, pruning may serve as a means to escape a local optimum. Figure 7 shows the loss function over time during the training, pruning and retraining process. During the original training process, the loss curve flattens out and seems to converge (note that we use early stopping to obtain our baseline model, so the original model was trained for longer than shown in Figure 7). Pruning causes an immediate increase in the loss function, but enables further gradient descent, allowing the retraining process to find a new, better local optimum. It seems that the disruption caused by
[Figure 8 panel labels: source and target embedding weights (columns ordered from most to least common word) and source/target layer 1-4 weight matrices, each divided into input gate, forget gate, output gate, and input blocks with feed-forward and recurrent halves.] | 1606.09274#25 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 26 | Figure 8: Graphical representation of the location of small weights in various parts of the model. Black pixels represent weights with absolute size in the bottom 80%; white pixels represent those with absolute size in the top 20%. Equivalently, these pictures illustrate which parameters remain after pruning 80% using our class-blind pruning scheme.
[Figure 7 axes: training iterations (×10^5) vs. loss.]
Figure 7: The validation set loss during training, pruning and retraining. The vertical dotted line marks the point when 80% of the parameters are pruned. The horizontal dotted line marks the best performance of the unpruned baseline.
pruning is beneficial in the long-run.
# 4.3 Starting with sparse models | 1606.09274#26 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 27 | pruning is beneficial in the long-run.
# 4.3 Starting with sparse models
The favorable performance of the pruned and retrained models raises the question: can we get a shortcut to this performance by starting with sparse models? That is, rather than train, prune, and retrain, what if we simply prune then train? To test this, we took the sparsity structure of our 50%-90% pruned models, and trained completely new models with the same sparsity structure. The purple line in Figure 6 shows that the "sparse from the beginning" models do not perform as well as the pruned and retrained models, but they do come close to the baseline performance. This shows that while the sparsity structure alone contains useful information about redundancy and can therefore produce a competitive compressed model, it is important to interleave pruning with training.
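A sketch of the "sparse from the beginning" setup: freshly initialized weights trained under a fixed mask taken from a pruned model. The training loop, the gradient callback, and all names are ours, for illustration only:

```python
import numpy as np

def train_sparse_from_start(mask, steps, lr, grad_fn, seed=0):
    """Train new weights constrained to a fixed sparsity structure."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(scale=0.1, size=mask.shape) * mask   # sparse initialization
    for _ in range(steps):
        grad = grad_fn(weights)                 # user-supplied gradient of the loss
        weights = (weights - lr * grad) * mask  # pruned positions stay at zero
    return weights
```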
Though our method involves just one pruning stage, other pruning methods interleave pruning with training more closely by including several iterations (Collins and Kohli, 2014; Han et al., 2015b). We expect that implementing this for NMT would likely result in further compression and performance improvements.
# 4.4 Storage size | 1606.09274#27 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 29 | # 4.5 Distribution of redundancy in NMT
We visualize in Figure 8 the redundancy structure of our NMT baseline model. Black pixels represent weights near to zero (those that can be pruned); white pixels represent larger ones. First we consider the embedding weight matrices, whose columns correspond to words in the vocabulary. Unsurprisingly, in Figure 8, we see that the parameters corresponding to the less common words are more dispensable. In fact, at the 80% pruning rate, for 100 uncommon source words and 1194 uncommon target words, we delete all parameters corresponding to that word. This is not quite the same as removing the word from the vocabulary – true out-of-vocabulary words are mapped to the embedding for the "unknown word" symbol, whereas these "pruned-out" words are mapped to a zero embedding. However in the original unpruned model these uncommon words already had near-zero embeddings, indicating that the model was unable to learn sufficiently distinctive representations. | 1606.09274#29 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 30 | Returning to Figure 8, now look at the eight weight matrices for the source and target connections at each of the four layers. Each matrix corresponds to the 4n × 2n matrix T4n,2n in Equation (2); its block layout is sketched in code below. In all eight matrices, we observe – as does Lu et al. (2016) – that the weights connecting to the input ĥ are most crucial, followed by the input gate i, then the output gate o, then the forget gate f. This is particularly true of the lower layers, which focus primarily on the input ĥ. However for higher layers, especially on the target side, weights connecting to the gates are as important as those connecting to the input ĥ. The gates represent the LSTM's ability to add to, delete from or retrieve information from the memory cell. Figure 8 therefore shows that these sophisticated memory cell abilities are most important at the end of the NMT pipeline (the top layer of the decoder). This is reasonable, as we expect higher-level features to be learned later in a deep learning pipeline. | 1606.09274#30 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
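To make the block layout concrete, here is a sketch of how a 4n × 2n LSTM parameter matrix such as T4n,2n can be sliced into its gate blocks and its feed-forward/recurrent halves. The ordering of gate blocks and column halves below is our own convention; real implementations may order them differently:

```python
import numpy as np

def split_lstm_matrix(T, n):
    """Split a (4n x 2n) LSTM matrix into per-gate blocks and input halves."""
    assert T.shape == (4 * n, 2 * n)
    gates = {}
    for idx, name in enumerate(["input_gate", "forget_gate", "output_gate", "input_h"]):
        block = T[idx * n:(idx + 1) * n, :]      # (n x 2n) block for one gate
        gates[name] = {
            "feed_forward": block[:, :n],        # acts on the layer below
            "recurrent":    block[:, n:],        # acts on the previous hidden state
        }
    return gates

T = np.zeros((4 * 8, 2 * 8))
print(split_lstm_matrix(T, 8)["input_h"]["recurrent"].shape)   # (8, 8)
```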
1606.09274 | 31 | We also observe that for lower layers, the feed-forward input is much more important than the recurrent input, whereas for higher layers the recurrent input becomes more important. This makes sense: lower layers concentrate on the low-level information from the current word embedding (the feed-forward input), whereas higher layers make
use of the higher-level representation of the sentence so far (the recurrent input).
Lastly, on close inspection, we notice several white diagonals emerging within some subsquares of the matrices in Figure 8, indicating that even without initializing the weights to identity matrices (as is sometimes done (Le et al., 2015)), an identity-like weight matrix is learned. At higher pruning percentages, these diagonals become more pronounced.
# 5 Generalizability of our results | 1606.09274#31 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 32 | # 5 Generalizability of our results
To test the generalizability of our results, we also test our pruning approach on a smaller, non-state-of-the-art NMT model trained on the WIT3 Vietnamese-English dataset (Cettolo et al., 2012), which consists of 133,000 sentence pairs. This model is effectively a scaled-down version of the state-of-the-art model in Luong et al. (2015a), with fewer layers, smaller vocabulary size, smaller hidden layer size, no attention mechanism, and about 11% as many parameters in total. It achieves a BLEU score of 9.61 on the validation set.
Although this model and its training set are on a different scale to our main model, and the language pair is different, we found very similar results. For this model, it is possible to prune 60% of parameters with no immediate performance loss, and with retraining it is possible to prune 90%, and regain original performance. Our main observations from Sections 4.1 to 4.5 are also replicated; in particular, class-blind pruning is most successful, "sparse from the beginning" models are less successful than pruned and retrained models, and we observe the same patterns as seen in Figure 8.
# 6 Future Work | 1606.09274#32 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 33 | # 6 Future Work
As noted in Section 4.3, including several iterations of pruning and retraining would likely improve the compression and performance of our pruning method. If possible it would be highly valuable to exploit the sparsity of the pruned models to speed up training and runtime, perhaps through sparse matrix representations and multiplications (see Section 3.4; a rough storage illustration follows below). Though we have found magnitude-based pruning to perform very well, it would be instructive to revisit the original claim that other pruning methods (for example Optimal Brain Damage and Optimal Brain Surgery) are more principled, and perform a comparative study.
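As a rough illustration of the storage savings a sparse representation can offer (not an experiment from the paper; sizes depend on dtype and index width), a pruned matrix can be stored in CSR form:

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
dense = rng.normal(size=(1000, 1000)).astype(np.float32)
dense[np.abs(dense) < np.quantile(np.abs(dense), 0.8)] = 0.0   # prune 80% by magnitude

sparse = csr_matrix(dense)
dense_bytes = dense.nbytes
sparse_bytes = sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes
print(dense_bytes, sparse_bytes)   # CSR needs roughly value + index storage per nonzero
```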
# 7 Conclusion | 1606.09274#33 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 34 | # 7 Conclusion
We have shown that weight pruning with retraining is a highly effective method of compression and regularization on a state-of-the-art NMT system, compressing the model to 20% of its size with no loss of performance. Though we are the first to apply compression techniques to NMT, we obtain a similar degree of compression to other current work on compressing state-of-the-art deep neural networks, with an approach that is simpler than most. We have found that the absolute size of parameters is of primary importance when choosing which to prune, leading to an approach that is extremely simple to implement, and can be applied to any neural network. Lastly, we have gained insight into the distribution of redundancy in the NMT architecture.
# 8 Acknowledgment
This work was partially supported by NSF Award IIS-1514268 and partially supported by a gift from Bloomberg L.P. We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Lastly, we ac- knowledge NVIDIA Corporation for the donation of Tesla K40 GPUs.
# References | 1606.09274#34 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 35 | # References
M. Gethsiyal Augasta and Thangairulappan Kathir- valavakumar. 2013. Pruning algorithms of neural networks - a comparative study. Central European Journal of Computer Science, 3(3):105â115.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. Wit3: Web inventory of transcribed and translated talks. In EAMT.
Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. 2015. Compressing neural networks with the hashing trick. In ICML.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP.
Maxwell D. Collins and Pushmeet Kohli. 2014. Mem- ory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442. | 1606.09274#35 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 36 | Maxwell D. Collins and Pushmeet Kohli. 2014. Mem- ory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. Training deep neural networks with low precision multiplications. In ICLR workshop.
Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting lin- ear structure within convolutional networks for efï¬- cient evaluation. In NIPS.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrish- nan, and Pritish Narayanan. 2015. Deep learning with limited numerical precision. In ICML.
Song Han, Huizi Mao, and William J Dally. 2015a. Deep compression: Compressing deep neural net- works with pruning, trained quantization and huff- man coding. In ICLR.
Song Han, Jeff Pool, John Tran, and William Dally. 2015b. Learning both weights and connections for efï¬cient neural network. In NIPS.
Babak Hassibi and David G. Stork. 1993. Second or- der derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann. | 1606.09274#36 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 37 | Babak Hassibi and David G. Stork. 1993. Second or- der derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. 2016. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and < 0.5MB model size. arXiv preprint arXiv:1602.07360.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisser- man. 2014. Speeding up convolutional neural net- works with low rank expansions. In NIPS.
S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015a. On using very large target vocabulary for neural machine translation. In ACL. | 1606.09274#37 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 38 | S´ebastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015b. Montreal neural machine translation systems for WMTâ15. In WMT.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP.
Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hin- ton. 2015. A simple way to initialize recurrent networks of rectiï¬ed linear units. arXiv preprint arXiv:1504.00941.
Yann Le Cun, John S. Denker, and Sara A. Solla. 1989. Optimal brain damage. In NIPS.
Zhouhan Lin, Matthieu Courbariaux, Roland Memise- vic, and Yoshua Bengio. 2016. Neural networks with few multiplications. In ICLR.
Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. 2016. Learning compact recurrent neural networks. In ICASSP.
Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domain. In IWSLT.
Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL. | 1606.09274#38 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.09274 | 39 | Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention- based neural machine translation. In EMNLP.
Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Address- ing the rare word problem in neural machine trans- lation. In ACL.
Kenton Murray and David Chiang. 2015. Auto-sizing neural networks: With applications to n-gram lan- guage models. In EMNLP.
Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, and Ian McGraw. 2016. On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition. In ICASSP.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In ACL.
Suraj Srinivas and R. Venkatesh Babu. 2015. Data- free parameter pruning for deep neural networks. In BMVC. | 1606.09274#39 | Compression of Neural Machine Translation Models via Pruning | Neural Machine Translation (NMT), like many other deep learning domains,
typically suffers from over-parameterization, resulting in large storage sizes.
This paper examines three simple magnitude-based pruning schemes to compress
NMT models, namely class-blind, class-uniform, and class-distribution, which
differ in terms of how pruning thresholds are computed for the different
classes of weights in the NMT architecture. We demonstrate the efficacy of
weight pruning as a compression technique for a state-of-the-art NMT system. We
show that an NMT model with over 200 million parameters can be pruned by 40%
with very little performance loss as measured on the WMT'14 English-German
translation task. This sheds light on the distribution of redundancy in the NMT
architecture. Our main result is that with retraining, we can recover and even
surpass the original performance with an 80%-pruned model. | http://arxiv.org/pdf/1606.09274 | Abigail See, Minh-Thang Luong, Christopher D. Manning | cs.AI, cs.CL, cs.NE | Accepted to CoNLL 2016. 9 pages plus references | null | cs.AI | 20160629 | 20160629 | [
{
"id": "1602.07360"
},
{
"id": "1504.00941"
}
] |
1606.08415 | 0 | arXiv:1606.08415v5 [cs.LG] 6 Jun 2023
# GAUSSIAN ERROR LINEAR UNITS (GELUS)
# Dan Hendrycks∗ University of California, Berkeley [email protected]
Kevin Gimpel Toyota Technological Institute at Chicago [email protected]
# ABSTRACT
We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU activation function is xΦ(x), where Φ(x) the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their value, rather than gates inputs by their sign as in ReLUs (x1x>0). We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all considered computer vision, natural language processing, and speech tasks.
1
# INTRODUCTION | 1606.08415#0 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 0 | arXiv:1606.08514v4 [cs.AI] 23 Jul 2020
# Towards Verified Artificial Intelligence
Sanjit A. Seshia∗, Dorsa Sadigh†, and S. Shankar Sastry∗
† Stanford University [email protected]
July 21, 2020
# Abstract
Verified artificial intelligence (AI) is the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically-specified requirements. This paper considers Verified AI from a formal methods perspective. We describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges.
# 1 Introduction | 1606.08514#0 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 1 | 1
# INTRODUCTION
Early artificial neurons utilized binary threshold units (Hopfield, 1982; McCulloch & Pitts, 1943). These hard binary decisions are smoothed with sigmoid activations, enabling a neuron to have a "firing rate" interpretation and to train with backpropagation. But as networks became deeper, training with sigmoid activations proved less effective than the non-smooth, less-probabilistic ReLU (Nair & Hinton, 2010) which makes hard gating decisions based upon an input's sign. Despite having less of a statistical motivation, the ReLU remains a competitive engineering solution which often enables faster and better convergence than sigmoids. Building on the successes of ReLUs, a recent modification called ELUs (Clevert et al., 2016) allows a ReLU-like nonlinearity to output negative values which sometimes increases training speed. In all, the activation choice has remained a necessary architecture decision for neural networks lest the network be a deep linear classifier. | 1606.08415#1 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 1 | # 1 Introduction
Artificial intelligence (AI) is a term used for computational systems that attempt to mimic aspects of human intelligence, including functions we intuitively associate with human minds such as "learning" and "problem solving" (e.g., see [17]). Russell and Norvig [66] describe AI as the study of general principles of rational agents and components for constructing these agents. We interpret the term AI broadly to include closely-related areas such as machine learning (ML) [53]. Systems that heavily use AI, henceforth referred to as AI-based systems, have had a significant impact in society in domains that include healthcare, transportation, finance, social networking, e-commerce, education, etc. This growing societal-scale impact has brought with it a set of risks and concerns including errors in AI software, cyber-attacks, and safety of AI-based systems [64, 21, 4]. Therefore, the question of verification and validation of AI-based systems has begun to demand the attention of the research community. We define "Verified AI" as the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically-specified requirements. How can we achieve this goal? | 1606.08514#1 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 2 | Deep nonlinear classifiers can fit their data so well that network designers are often faced with the choice of including stochastic regularizers like adding noise to hidden layers or applying dropout (Srivastava et al., 2014), and this choice remains separate from the activation function. Some stochastic regularizers can make the network behave like an ensemble of networks, a pseudoensemble (Bachman et al., 2014), and can lead to marked accuracy increases. For example, the stochastic regularizer dropout creates a pseudoensemble by randomly altering some activation decisions through zero multiplication. Nonlinearities and dropout thus determine a neuron's output together, yet the two innovations have remained distinct. More, neither subsumed the other because popular stochastic regularizers act irrespectively of the input and nonlinearities are aided by such regularizers.
In this work, we introduce a new nonlinearity, the Gaussian Error Linear Unit (GELU). It relates to stochastic regularizers in that it is the expectation of a modification to Adaptive Dropout (Ba & Frey, 2013). This suggests a more probabilistic view of a neuron's output. We find that this novel nonlinearity matches or exceeds models with ReLUs or ELUs across tasks from computer vision, natural language processing, and automatic speech recognition.
# 2 GELU FORMULATION | 1606.08415#2 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 2 | A natural starting point is to consider formal methods – a field of computer science and engineering concerned with the rigorous mathematical specification, design, and verification of systems [86, 16]. At its core, formal methods is about proof: formulating specifications that form proof obligations, designing systems to meet those obligations, and verifying, via algorithmic proof search, that the systems indeed meet their specifications. A spectrum of formal methods, from specification-driven testing and simulation [29], to model checking [14, 62, 15] and theorem proving (see, e.g. [58, 43, 37]) are used routinely in the computer-aided design of integrated circuits and have been widely applied to find bugs in software, analyze embedded systems, and find security vulnerabilities. At the heart of these advances are computational proof engines such as Boolean satisfiability (SAT) solvers [50], Boolean reasoning and manipulation routines based on Binary Decision Diagrams (BDDs) [9], and satisfiability modulo theories (SMT) solvers [6]; a toy solver query is shown below. | 1606.08514#2 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
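As a toy illustration of such a proof engine in use, here is a small satisfiability query (assuming the `z3-solver` Python package is installed; the example is ours, not from the paper):

```python
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.add(x + y == 10, x > y, y > 0)   # a small arithmetic satisfiability query
if s.check() == sat:
    print(s.model())               # a satisfying assignment, e.g. x = 9, y = 1
else:
    print("unsatisfiable")
```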
1606.08415 | 3 | # 2 GELU FORMULATION
We motivate our activation function by combining properties from dropout, zoneout, and ReLUs. First note that a ReLU and dropout both yield a neuron's output with the ReLU deterministically multiplying the input by zero or one and dropout stochastically multiplying by zero. Also, a new RNN regularizer called zoneout stochastically multiplies inputs by one (Krueger et al., 2016). We merge this functionality by multiplying the input by zero or one, but the values of this zero-one mask are stochastically determined while also dependent upon the input. Specifically, we can multiply the neuron input x by m ∼ Bernoulli(Φ(x)), where Φ(x) = P (X ≤
∗Work done while the author was at TTIC. Code available at github.com/hendrycks/GELUs
1 | 1606.08415#3 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 3 | In this paper, we consider the challenge of Veriï¬ed AI from a formal methods perspective. That is, we review the manner in which formal methods have traditionally been applied, analyze the challenges this approach may face for AI-based systems, and propose ideas to overcome these challenges. We emphasize that our discussion is focused on the role of formal methods and does not cover the broader set of techniques
1
that could be used to improve assurance in AI-based systems. Additionally, we seek to identify challenges applicable to a broad range of AI/ML systems, and not limited to speciï¬c technologies such as deep neural networks (DNNs) or reinforcement learning (RL) systems. Our view of the challenges is largely shaped by problems arising from the use of AI and ML in autonomous and semi-autonomous systems, though we believe the ideas presented here apply more broadly.
We begin in Sec. 2 with some brief background on formal veriï¬cation and an illustrative example. We then outline challenges for Veriï¬ed AI in Sec. 3 below, and describe ideas to address each of these challenges in Sec. 4.1
# 2 Background and Illustrative Example | 1606.08514#3 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 4 | âWork done while the author was at TTIC. Code available at github.com/hendrycks/GELUs
1
x), X â¼ N (0, 1) is the cumulative distribution function of the standard normal distribution. We choose this distribution since neuron inputs tend to follow a normal distribution, especially with Batch Normalization. In this setting, inputs have a higher probability of being âdroppedâ as x decreases, so the transformation applied to x is stochastic yet depends upon the input. Masking inputs in this fashion re- tains non-determinism but maintains dependency upon the input value. A stochastically chosen mask amounts to a stochastic zero or identity transforma- tion of the input. This is much like Adaptive Dropout (Ba & Frey, 2013), but adaptive dropout is used in tandem with nonlinearities and uses a logistic not standard normal distribution. We found that it is possible to train com- petitive MNIST and TIMIT networks solely with this stochastic regularizer, all without using any nonlinearity. | 1606.08415#4 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
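A sketch of the stochastic zero-or-identity masking described in the preceding chunk, m ∼ Bernoulli(Φ(x)); this is our illustration, not the authors' released code:

```python
import numpy as np
from scipy.stats import norm

def stochastic_phi_mask(x, rng=np.random.default_rng(0)):
    """Multiply each input by a Bernoulli(Phi(x)) draw: larger inputs are kept more often."""
    keep_prob = norm.cdf(x)               # Phi(x), the standard normal CDF
    mask = rng.random(x.shape) < keep_prob
    return x * mask
```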
1606.08514 | 4 | # 2 Background and Illustrative Example
Consider the typical formal verification process as shown in Figure 1, which begins with the following three inputs: 1. A model of the system to be verified, S; 2. A model of the environment, E, and 3. The property to be verified, Φ. The verifier generates as output a YES/NO answer, indicating whether or not S satisfies the property Φ in environment E. Typically, a NO output is accompanied by a counterexample, also called an error trace, which is an execution of the system that indicates how Φ is violated. Some formal verification tools also include a proof or certificate of correctness with a YES answer. In this paper, we take a broad view of
[Figure 1 diagram: the system S, environment E, and property Φ are composed and passed to a verifier, which returns YES (possibly with a proof) or NO with a counterexample.]
Figure 1: Formal verification procedure. | 1606.08514#4 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 5 | We often want a deterministic decision from a neural network, and this gives rise to our new nonlinearity. The nonlinearity is the expected transformation of the stochastic regularizer on an input x, which is Φ(x) × Ix + (1 − Φ(x)) × 0x = xΦ(x). Loosely, this expression states that we scale x by how much greater it is than other inputs. Since the cumulative distribution function of a Gaussian is often computed with the error function, we define the Gaussian Error Linear Unit (GELU) as
GELU(x) = xP(X ≤ x) = xΦ(x) = x · (1/2)[1 + erf(x/√2)].
We can approximate the GELU with
0.5x(1 + tanh[√(2/π)(x + 0.044715x³)])
# or
xσ(1.702x),
if greater feedforward speed is worth the cost of exactness. | 1606.08415#5 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 5 | [Figure 1 diagram: the system S, environment E, and property Φ are composed and passed to a verifier, which returns YES (possibly with a proof) or NO with a counterexample.]
Figure 1: Formal verification procedure.
formal methods: any technique that uses some aspect of formal specification, or verification, or synthesis, is included. For instance, we include simulation-based hardware verification methods or model-based testing methods for software since they use formal specifications or models to guide the process of simulation or testing.
In order to apply formal verification to AI-based systems, at a minimum, one must be able to represent the three inputs S, E and Φ in formalisms for which (ideally) there exist efficient decision procedures to answer the YES/NO question as described above. However, as we describe in Sec. 3, even constructing good representations of the three inputs is not straightforward, let alone dealing with the complexity of the underlying decision problems and associated design issues.
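As a loose illustration of the three inputs and the YES/NO output discussed above, the following Python sketch frames a simulation-based check as a typed interface; the names and the falsification-style search are our own illustrative assumptions, not an existing tool's API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional, Sequence

Trace = Sequence[dict]  # an execution: a sequence of closed-loop states

@dataclass
class VerificationResult:
    holds: bool                             # the YES/NO answer
    counterexample: Optional[Trace] = None  # error trace returned with NO

def falsify(system: Callable[[dict, dict], dict],           # model of the system S
            environments: Iterable[Callable[[int], dict]],  # behaviors of the environment E
            prop: Callable[[Trace], bool],                   # property Phi over traces
            horizon: int = 100) -> VerificationResult:
    """Simulation-based check: search environment behaviors for a violation of Phi.
    This can only falsify the property; a YES here is not a proof of correctness."""
    for env in environments:
        trace, state = [], {}
        for t in range(horizon):
            state = system(state, env(t))
            trace.append(dict(state))
        if not prop(trace):
            return VerificationResult(holds=False, counterexample=trace)
    return VerificationResult(holds=True)
```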
We will illustrate the ideas in this paper with examples from the domain of (semi-)autonomous driving. Fig 2 shows an illustrative example of an AI-based system: a closed-loop cyber-physical system comprising | 1606.08514#5 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 6 | # or
xσ(1.702x),
if greater feedforward speed is worth the cost of exactness.
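For reference, a minimal NumPy sketch of the exact GELU and the two approximations given above (it mirrors the formulas in the text and is not necessarily the code in the linked repository):

```python
import numpy as np
from scipy.special import erf

def gelu(x):
    """Exact GELU: x * Phi(x), with Phi the standard normal CDF."""
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

def gelu_tanh(x):
    """Tanh approximation from the text."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def gelu_sigmoid(x):
    """Sigmoid approximation: x * sigmoid(1.702 * x)."""
    return x / (1.0 + np.exp(-1.702 * x))

x = np.linspace(-4.0, 4.0, 801)
print(np.max(np.abs(gelu(x) - gelu_tanh(x))))     # on the order of 1e-3 or less
print(np.max(np.abs(gelu(x) - gelu_sigmoid(x))))  # on the order of 1e-2
```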
We could use different CDFs. For example, we could use the logistic distribution CDF σ(x) to get what we call the Sigmoid Linear Unit (SiLU) xσ(x). We could use the CDF of N(µ, σ²) and have µ and σ be learnable hyperparameters, but throughout this work we simply let µ = 0 and σ = 1. Consequently, we do not introduce any new hyperparameters in the following experiments. In the next section, we show that the GELU exceeds ReLUs and ELUs across numerous tasks.
# 3 GELU EXPERIMENTS | 1606.08415#6 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 6 | 1The first version of this paper was published in July 2016 in response to the call for white papers for the CMU Exploratory Workshop on Safety and Control for AI held in June 2016, and a second version in October 2017. This is the latest version reflecting the evolution of the authors' view of the challenges and approaches for Verified AI.
a semi-autonomous vehicle with machine learning components along with its environment. Specifically, assume that the semi-autonomous "ego vehicle" has an automated emergency braking system (AEBS) that attempts to detect and classify objects in front of it and actuate the brakes when needed to avert a collision. Figure 2 shows the AEBS as a system composed of a controller (automatic braking), a plant (vehicle subsystem under control including other parts of the autonomy stack), and a sensor (camera) along with a perception component implemented using a deep neural network. The AEBS, when combined with the vehicle's environment, forms a closed loop cyber-physical system. The controller regulates the acceleration and braking of the plant using the velocity of the ego vehicle and the distance between it and an obstacle. The environment of the ego vehicle comprises both agents and objects outside the vehicle (other vehicles, | 1606.08514#6 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 7 | # 3 GELU EXPERIMENTS
We evaluate the GELU, ELU, and ReLU on MNIST classification (grayscale images with 10 classes, 60k training examples and 10k test examples), MNIST autoencoding, Tweet part-of-speech tagging (1000 training, 327 validation, and 500 testing tweets), TIMIT frame recognition (3696 training, 1152 validation, and 192 test audio sentences), and CIFAR-10/100 classification (color images with 10/100 classes, 50k training and 10k test examples). We do not evaluate nonlinearities like the LReLU because of its similarity to ReLUs (see Maas et al. (2013) for a description of LReLUs).
# 3.1 MNIST CLASSIFICATION
Let us verify that this nonlinearity competes with previous activation functions by replicating an experiment from Clevert et al. (2016). To this end, we train a fully connected neural network with GELUs (µ = 0, σ = 1), ReLUs, and ELUs (α = 1). Each 8-layer, 128 neuron wide neural network is trained for 50 epochs with a batch size of 128. This experiment differs from those of
[Figure 2 plots (MNIST classification): log loss vs. epoch for GELU, ELU, and ReLU, without dropout (left) and with a dropout keep rate of 0.5 (right).] | 1606.08415#7 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 7 | [Figure 2 diagram: camera sensor input feeds a learning-based perception component, whose output goes to the controller; the controller actuates the plant (vehicle), which interacts with the environment, closing the loop.]
# Figure 2: Example of closed-loop cyber-physical system with machine learning components (introduced in [22]).
pedestrians, road objects, etc.) as well as inside the vehicle (e.g., a driver). A safety requirement for the closed loop system can be informally characterized as the property of maintaining a safe distance between the moving ego vehicle and any other agent or object on the road. However, as we will see in Sec. 3, there are many nuances to the specification, modeling, and verification of a system such as this one.
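As a rough sketch of the closed loop just described (the thresholds, interfaces, and braking rule below are our simplifying assumptions, not the system from [22]):

```python
from dataclasses import dataclass

@dataclass
class EgoState:
    velocity: float   # ego speed (m/s)
    distance: float   # distance to the nearest detected obstacle ahead (m)

def aebs_controller(state: EgoState, obstacle_detected: bool,
                    reaction_margin_s: float = 2.0) -> float:
    """Braking command in [0, 1]; obstacle_detected stands in for the output of
    the DNN-based perception component processing the camera image."""
    if not obstacle_detected or state.velocity <= 0.0:
        return 0.0
    time_to_collision = state.distance / state.velocity
    return 1.0 if time_to_collision < reaction_margin_s else 0.0

def plant_step(state: EgoState, brake: float, dt: float = 0.1,
               max_decel: float = 8.0) -> EgoState:
    """Very coarse vehicle dynamics closing the loop with the environment."""
    v = max(0.0, state.velocity - brake * max_decel * dt)
    return EgoState(velocity=v, distance=state.distance - v * dt)

# One closed-loop step: perception -> controller -> plant.
state = EgoState(velocity=15.0, distance=25.0)
state = plant_step(state, aebs_controller(state, obstacle_detected=True))
```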
# 3 Challenges for Verified AI
We identify five major challenges to achieving formally-verified AI-based systems, described in more detail below.
# 3.1 Environment Modeling | 1606.08514#7 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 8 | [Figure 2 plots (MNIST classification): log loss vs. epoch for GELU, ELU, and ReLU, without dropout (left) and with a dropout keep rate of 0.5 (right).]
Figure 2: MNIST Classification Results. Left are the loss curves without dropout, and right are curves with a dropout rate of 0.5. Each curve is the median of five runs. Training set log losses are the darker, lower curves, and the fainter, upper curves are the validation set log loss curves.
[Figure 3 plots (MNIST robustness): test accuracy and log loss vs. noise strength for GELU, ELU, and ReLU.]
Figure 3: MNIST Robustness Results. Using different nonlinearities, we record the test set accuracy decline and log loss increase as inputs are noised. The MNIST classifier trained without dropout received inputs with uniform noise Unif[âa, a] added to each example at different levels a, where a = 3 is the greatest noise strength. Here GELUs display robustness matching or exceeding ELUs and ReLUs. | 1606.08415#8 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 8 | We identify five major challenges to achieving formally-verified AI-based systems, described in more detail below.
# 3.1 Environment Modeling
The environments in which AI/ML-based systems operate can be very complex, with considerable uncertainty even about how many and which agents are in the environment (both human and robotic), let alone about their intentions and behaviors. As an example, consider the difficulty in modeling urban traffic environments in which an autonomous car must operate. Indeed, AI/ML is often introduced into these systems precisely to deal with such complexity and uncertainty! From a formal methods perspective, this makes it very hard to create realistic environment models with respect to which one can perform verification or synthesis.
We see the main challenges for environment modeling as being threefold:
⢠Unknown Variables: In the traditional success stories for formal veriï¬cation, such as verifying cache coherence protocols or device drivers, the interface between the system S and its environment E is well- deï¬ned. The environment can only inï¬uence the system through this interface. However, for AI-based systems, such as an autonomous vehicle example of Sec. 2, it may be impossible to precisely deï¬ne all the variables (features) of the environment. Even in restricted scenarios where the environment variables | 1606.08514#8 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 9 | Clevert et al. in that we use the Adam optimizer (Kingma & Ba, 2015) rather than stochastic gradient descent without momentum, and we also show how well nonlinearities cope with dropout. Weights are initialized with unit norm rows, as this has positive impact on each nonlinearity's performance (Hendrycks & Gimpel, 2016; Mishkin & Matas, 2016; Saxe et al., 2014). Note that we tune over the learning rates {10⁻³, 10⁻⁴, 10⁻⁵} with 5k validation examples from the training set and take the median results for five runs. Using these classifiers, we demonstrate in Figure 3 that classifiers using a GELU can be more robust to noised inputs. Figure 2 shows that the GELU tends to have the lowest median training log loss with and without dropout. Consequently, although the GELU is inspired by a different stochastic process, it comports well with dropout.
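A PyTorch sketch consistent with the setup described above (eight fully connected layers of width 128, GELU activations, optional dropout, Adam); details such as the output head and initialization are simplified assumptions rather than the exact experimental code.

```python
import torch
import torch.nn as nn

def make_mlp(depth=8, width=128, p_drop=0.5, n_in=784, n_out=10):
    """Fully connected MNIST classifier with GELU hidden activations."""
    layers, d = [], n_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.GELU(), nn.Dropout(p_drop)]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, n_out))

model = make_mlp()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # tune over {1e-3, 1e-4, 1e-5}
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch of 128 flattened 28x28 images.
x, y = torch.randn(128, 784), torch.randint(0, 10, (128,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```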
# 3.2 MNIST AUTOENCODER | 1606.08415#9 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08415 | 10 | # 3.2 MNIST AUTOENCODER
We now consider a self-supervised setting and train a deep autoencoder on MNIST (Desjardins et al., 2015). To accomplish this, we use a network with layers of width 1000, 500, 250, 30, 250, 500, 1000, in that order. We again use the Adam optimizer and a batch size of 64. Our loss is the mean squared loss. We vary the learning rate from 10⁻³ to 10⁻⁴. We also tried a learning rate of 0.01 but ELUs diverged, and GELUs and RELUs converged poorly. The results in Figure 4 indicate the GELU accommodates different learning rates and significantly outperforms the other nonlinearities.
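A PyTorch sketch of the autoencoder layout described above (widths 1000-500-250-30-250-500-1000, mean squared error, Adam, batch size 64); where exactly the activation is applied, e.g. whether the reconstruction layer has one, is our simplification.

```python
import torch
import torch.nn as nn

widths = [784, 1000, 500, 250, 30, 250, 500, 1000, 784]
layers = []
for d_in, d_out in zip(widths[:-1], widths[1:]):
    layers += [nn.Linear(d_in, d_out), nn.GELU()]
autoencoder = nn.Sequential(*layers[:-1])  # no activation on the reconstruction

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)  # or 1e-4
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)            # dummy batch of flattened digits
loss = loss_fn(autoencoder(x), x)  # reconstruct the input
optimizer.zero_grad()
loss.backward()
optimizer.step()
```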
[Figure 4 plots (MNIST autoencoding): reconstruction error vs. epoch for GELU, ELU, and ReLU at learning rates 1e-3 (left) and 1e-4 (right).]
Figure 4: MNIST Autoencoding Results. Each curve is the median of three runs. Left are loss curves for a learning rate of 10⁻³, and the right figure is for a 10⁻⁴ learning rate. Light, thin curves correspond to test set log losses. | 1606.08415#10 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 10 | • Modeling with the Right Fidelity: In traditional uses of formal verification, it is usually acceptable to model the environment as a non-deterministic process subject to constraints specified in a suitable logic or automata-based formalism. Typically such an environment model is termed as being "over-approximate", meaning that it may include (many) more environment behaviors than are possible. Over-approximate environment modeling permits one to perform sound verification without a detailed environment model, which can be inefficient to reason with and hard to obtain. However, for AI-based autonomy, purely non-deterministic modeling is likely to produce highly over-approximate models, which in turn yields too many spurious bug reports, rendering the verification process useless in practice. Moreover, many AI-based systems make distributional assumptions on the environment, thus requiring probabilistic modeling; however, it can be difficult to exactly ascertain the underlying distributions. One can address this by learning a probabilistic model from data, but in this case it is important to remember that the model parameters (e.g., | 1606.08514#10 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 11 | [Figure 5 plot residue: log loss vs. epoch curves.]
Figure 5: TIMIT Frame Classification. Learning curves show training set convergence, and the lighter curves show the validation set convergence.
3.3 TWITTER POS TAGGING
Many datasets in natural language processing are relatively small, so it is important that an activation generalize well from few examples. To meet this challenge we compare the nonlinearities on POS-annotated tweets (Gimpel et al., 2011; Owoputi et al., 2013) which contain 25 tags. The tweet tagger is simply a two-layer network with pretrained word vectors trained on a corpus of 56 million tweets (Owoputi et al., 2013). The input is the concatenation of the vector of the word to be tagged and those of its left and right neighboring words. Each layer has 256 neurons, a dropout keep probability of 0.8, and the network is optimized with Adam while tuning over the learning rates {10⁻³, 10⁻⁴, 10⁻⁵}. We train each network five times per learning rate, and the median test set error is 12.57% for the GELU, 12.67% for the ReLU, and 12.91% for the ELU.
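A sketch of the tagger's input construction and network shape as described (the embedding dimensionality, toy vocabulary, and padding handling are illustrative assumptions; the paper uses pretrained tweet word vectors):

```python
import torch
import torch.nn as nn

emb_dim, n_tags = 100, 25                            # assumed embedding size; 25 POS tags
vocab = {"<pad>": 0, "the": 1, "cat": 2, "sat": 3}   # toy vocabulary
embeddings = nn.Embedding(len(vocab), emb_dim)       # pretrained vectors in the paper

def features(token_ids, i):
    """Concatenate the vectors of the left neighbor, target word, and right neighbor."""
    idx = [token_ids[j] if 0 <= j < len(token_ids) else 0 for j in (i - 1, i, i + 1)]
    return embeddings(torch.tensor(idx)).flatten()

tagger = nn.Sequential(
    nn.Linear(3 * emb_dim, 256), nn.GELU(), nn.Dropout(0.2),  # dropout keep prob 0.8
    nn.Linear(256, 256), nn.GELU(), nn.Dropout(0.2),
    nn.Linear(256, n_tags),
)

sentence = [vocab["the"], vocab["cat"], vocab["sat"]]
logits = tagger(features(sentence, i=1))   # tag scores for "cat"
```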
3.4 TIMIT FRAME CLASSIFICATION | 1606.08415#11 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 11 | underlying distributions. One can address this by learning a probabilistic model from data, but in this case it is important to remember that the model parameters (e.g., transition probabilities) are only estimates, not precise representations of environment behavior. Thus, verification algorithms cannot consider the resulting probabilistic model to be "perfect"; we need to represent uncertainty in the model itself. | 1606.08514#11 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
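To illustrate the point above about learned transition probabilities being estimates rather than ground truth, a small sketch that attaches an interval of plausible values to an estimated probability; the Beta-posterior treatment is one possible choice, not a prescription from the paper.

```python
from scipy.stats import beta

def transition_interval(successes, trials, level=0.95):
    """Interval of plausible values for a transition probability estimated from
    data (Beta(1, 1) prior). A verifier working over such interval-valued
    probabilities would have to establish the property for every value inside."""
    a, b = 1 + successes, 1 + (trials - successes)
    lo = beta.ppf((1 - level) / 2, a, b)
    hi = beta.ppf(1 - (1 - level) / 2, a, b)
    return lo, hi

# e.g., a pedestrian crossed in 12 of 200 observed encounters:
print(transition_interval(12, 200))  # roughly (0.03, 0.10), not a single "true" value
```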
1606.08415 | 12 | 3.4 TIMIT FRAME CLASSIFICATION
Our next challenge is phone recognition with the TIMIT dataset which has recordings of 680 speakers in a noiseless environment. The system is a five-layer, 2048-neuron wide classifier as in (Mohamed et al., 2012) with 39 output phone labels and a dropout rate of 0.5 as in (Srivastava, 2013). This network takes as input 11 frames and must predict the phone of the center
[Figure 6 plot (CIFAR-10): classification error (%) vs. epoch for GELU, ELU, and ReLU.]
Figure 6: CIFAR-10 Results. Each curve is the median of three runs. Learning curves show training set error rates, and the lighter curves show the test set error rates.
frame using 26 MFCC, energy, and derivative features per frame. We tune over the learning rates {10⁻³, 10⁻⁴, 10⁻⁵} and optimize with Adam. After five runs per setting, we obtain the median curves in Figure 5, and median test error chosen at the lowest validation error is 29.3% for the GELU, 29.5% for the ReLU, and 29.6% for the ELU.
# 3.5 CIFAR-10/100 CLASSIFICATION | 1606.08415#12 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 12 | • Modeling Human Behavior: For many AI-based systems, such as semi-autonomous vehicles, human agents are a key part of the environment and/or system. Researchers have attempted modeling humans as non-deterministic or stochastic processes with the goal of verifying the correctness of the overall system [63, 67]. However, such approaches must deal with the variability and uncertainty in human behavior. One could take a data-driven approach based on machine learning (e.g., [55]), but such an approach is sensitive to the expressivity of the features used by the ML model and the quality of data. In order to achieve Verified AI for such human-in-the-loop systems, we need to address the limitations of current human modeling techniques and provide guarantees about their prediction accuracy and convergence. When learned models are used, one must represent any uncertainty in the learned parameters as a first-class entity in the model, and take that into account in verification and control.
The first challenge, then, is to come up with a systematic method of environment modeling that allows one to provide provable guarantees on the system's behavior even when there is considerable uncertainty about the environment.
# 3.2 Formal Specification | 1606.08514#12 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 13 | # 3.5 CIFAR-10/100 CLASSIFICATION
Next, we demonstrate that for more intricate architectures the GELU nonlinearity again outperforms other nonlinearities. We evaluate this activation function using CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) on shallow and deep convolutional neural networks, respectively.
Our shallower convolutional neural network is a 9-layer network with the architecture and training procedure from Salimans & Kingma (2016) while using batch normalization to speed up training. The architecture is described in appendix A and recently obtained state of the art on CIFAR-10 without data augmentation. No data augmentation was used to train this network. We tune over the initial learning rates {10⁻³, 10⁻⁴, 10⁻⁵} with 5k validation examples then train on the whole training set again based upon the learning rate from cross validation. The network is optimized with Adam for 200 epochs, and at the 100th epoch the learning rate linearly decays to zero. Results are shown in Figure 6, and each curve is a median of three runs. Ultimately, the GELU obtains a median error rate of 7.89%, the ReLU obtains 8.16%, and the ELU obtains 8.41%.
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 13 | Formal verification critically relies on having a formal specification: a precise, mathematical statement of what the system is supposed to do. However, the challenge of coming up with a high-quality formal specification is well known, even in application domains in which formal verification has found considerable success (see, e.g., [7]). This challenge is only exacerbated in AI-based systems. We identify three major problems. Specification for Hard-to-Formalize Tasks: Consider the perception module in the AEBS controller of Fig. 2 which must detect and classify objects, distinguishing vehicles and pedestrians from other objects. Correctness for this module in the classic formal methods sense requires a formal definition of each type of road user, which is extremely difficult, if not impossible. Similar problems arise for other tasks involving perception and communication, such as natural language processing. How then, do we specify correctness properties for such a module? What should the specification language be and what tools can one use to construct a specification? Quantitative vs. Boolean Specifications: | 1606.08514#13 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 14 | Next we consider a wide residual network on CIFAR-100 with 40 layers and a widening factor of 4 (Zagoruyko & Komodakis, 2016). We train for 50 epochs with the learning rate schedule described in (Loshchilov & Hutter, 2016) (T0 = 50, η = 0.1) with Nesterov momentum, and with a dropout keep probability of 0.7. Some have noted that ELUs have an exploding gradient with residual networks (Shah et al., 2016), and this is alleviated with batch normalization at the end of a residual block. Consequently, we use a Conv-Activation-Conv-Activation-BatchNorm block architecture to be charitable to ELUs. Over three runs we obtain the median convergence curves in Figure 7. Meanwhile, the GELU achieves a median error of 20.74%, the ReLU obtains 21.77% (without our changes described above, the original 40-4 WideResNet with a ReLU obtains 22.89% (Zagoruyko & Komodakis, 2016)), and the ELU obtains 22.98%.
# 4 DISCUSSION | 1606.08415#14 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08415 | 15 | # 4 DISCUSSION
Across several experiments, the GELU outperformed previous nonlinearities, but it bears semblance to the ReLU and ELU in other respects. For example, as σ → 0 and if µ = 0, the GELU becomes a ReLU. Moreover, the ReLU and GELU are equal asymptotically. In fact, the GELU can be viewed as a way to smooth a ReLU. To see this, recall that ReLU = max(x, 0) = x1(x > 0) (where
[Figure 7 plot (CIFAR-100): loss vs. epoch for GELU, ELU, and ReLU.]
Figure 7: CIFAR-100 Wide Residual Network Results. Learning curves show training set convergence with dropout on, and the lighter curves show the test set convergence with dropout off. | 1606.08415#15 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 15 | functions specifying costs or rewards. Moreover, there can be multiple objectives, some of which must be satisfied together, and others that may need to be traded off against each other in certain environments. What are the best ways to unify Boolean and quantitative approaches to specification? Are there formalisms that can capture commonly discussed properties of AI components such as robustness and fairness in a unified manner? Data vs. Formal Requirements: The view of "data as specification" is common in machine learning. Labeled "ground truth" data is often the only specification of correct behavior. On the other hand, a specification in formal methods is a mathematical property that defines the set of correct behaviors. How can we bridge this gap?
Thus, the second challenge is to design effective methods to specify desired and undesired properties of systems that use AI- or ML-based components.
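As a small illustration of the Boolean versus quantitative distinction raised above, the sketch below states a safe-distance requirement both as a Boolean predicate over a trace and as a quantitative robustness value; the signal name and margin are illustrative.

```python
from typing import Sequence

def safe_distance_bool(distances: Sequence[float], d_min: float = 5.0) -> bool:
    """Boolean specification: the ego vehicle always keeps at least d_min meters."""
    return all(d >= d_min for d in distances)

def safe_distance_robustness(distances: Sequence[float], d_min: float = 5.0) -> float:
    """Quantitative (STL-style) robustness: the worst-case margin over the trace.
    Positive means satisfied with room to spare; negative measures the violation."""
    return min(d - d_min for d in distances)

trace = [12.0, 9.5, 6.1, 5.5, 7.0]
print(safe_distance_bool(trace), safe_distance_robustness(trace))  # True 0.5
```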
# 3.3 Modeling Learning Systems
In most traditional applications of formal verification, the system S is precisely known: it is a program or a circuit described in a programming language or hardware description language. The system modeling problem is primarily concerned with reducing the size of S to a more tractable one by abstracting away irrelevant details. | 1606.08514#15 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 16 | 1 is the indicator function), while the GELU is xΦ(x) if µ = 0, σ = 1. Then the CDF is a smooth approximation to the binary function the ReLU uses, like how the sigmoid smoothed binary threshold activations. Unlike the ReLU, the GELU and ELU can be both negative and positive. In fact, if we used the cumulative distribution function of the standard Cauchy distribution, then the ELU (when α = 1/π) is asymptotically equal to xP(C ≤ x), C ∼ Cauchy(0, 1) for negative values and for positive values is xP(C ≤ x) if we shift the line down by 1/π. These are some fundamental relations to previous nonlinearities. | 1606.08415#16 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 16 | AI-based systems lead to a very different challenge for system modeling, primarily stemming from the
use of machine learning: • Very high-dimensional input space: ML components used for perception usually operate over very high-dimensional input spaces. For the illustrative example of Sec. 2 from [22], each input RGB image has dimension 1000 × 600 pixels, so the input space contains 256^(1000×600×3) elements, and in general the input is a stream of such high-dimensional vectors. Although formal methods have been used for high-dimensional input spaces (e.g., in digital circuits), the nature of the input spaces for ML-based perception is different: not entirely Boolean, but hybrid, including both discrete and continuous variables.
⢠Very high-dimensional parameter/state space: ML components such as deep neural networks have any- where from thousands to millions of model parameters and primitive components. For example, state- of-the-art DNNs used by the authors in instantiations of the example of Fig. 2 have up to 60 million parameters and tens of layers. This gives rise to a huge search space for veriï¬cation that requires careful abstraction. | 1606.08514#16 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
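A back-of-the-envelope check of the input-space figures quoted in the chunk above, assuming 8-bit channel intensities:

```python
import math

pixels = 1000 * 600
dimensions = pixels * 3                      # 1,800,000 input coordinates (RGB)
values_per_coordinate = 256                  # 8-bit channel intensities
log10_inputs = dimensions * math.log10(values_per_coordinate)
print(dimensions)                            # 1800000
print(round(log10_inputs))                   # ~4.33 million, i.e. the space has ~10**(4.3e6) elements
```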
However, the GELU has several notable differences. This non-convex, non-monotonic function is not linear in the positive domain and exhibits curvature at all points. Meanwhile ReLUs and ELUs, which are convex and monotonic activations, are linear in the positive domain and thereby can lack curvature. As such, increased curvature and non-monotonicity may allow GELUs to more easily approximate complicated functions than can ReLUs or ELUs. Also, since ReLU(x) = x1(x > 0) and GELU(x) = xΦ(x) if µ = 0, σ = 1, we can see that the ReLU gates the input depending upon its sign, while the GELU weights its input depending upon how much greater it is than other inputs. In addition and significantly, the GELU has a probabilistic interpretation given that it is the expectation of a stochastic regularizer. | 1606.08415#17 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
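A quick numeric check of the properties discussed in the chunk above: unlike the ReLU, the GELU is non-monotonic and takes small negative values for negative inputs (NumPy sketch):

```python
import numpy as np
from scipy.special import erf

gelu = lambda x: 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))
relu = lambda x: np.maximum(x, 0.0)
elu = lambda x: np.where(x > 0, x, np.exp(x) - 1.0)  # alpha = 1

x = np.array([-3.0, -1.0, -0.5, 0.0, 0.5, 1.0, 3.0])
print(np.round(gelu(x), 3))   # dips below zero for negative x (minimum is about -0.17)
print(relu(x))
print(np.round(elu(x), 3))
```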
1606.08514 | 17 | • Online adaptation and evolution: Some learning systems, such as a robot using reinforcement learning, evolve as they encounter new data and situations. For such systems, design-time verification must either account for future changes in the behavior of the system, or else be performed incrementally and online as the learning system evolves.
⢠Modeling systems in context: For many AI/ML components, their speciï¬cation is only deï¬ned by the context. For example, verifying robustness of a DNN such as the one in Fig. 2 requires us to capture a model of the surrounding system. We need techniques to model ML components along with their context so that semantically meaningful properties can be veriï¬ed.
# 3.4 Efficient and Scalable Design and Verification of Models and Data | 1606.08514#17 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |
1606.08415 | 18 | We also have two practical tips for using the GELU. First, we advise using an optimizer with momentum when training with a GELU, as is standard for deep neural networks. Second, using a close approximation to the cumulative distribution function of a Gaussian distribution is important. A sigmoid function σ(x) = 1/(1 + e^(−x)) is an approximation of a cumulative distribution function of a normal distribution. However, we found that a Sigmoid Linear Unit (SiLU) xσ(x) performs worse than GELUs but usually better than ReLUs and ELUs, so our SiLU is also a reasonable nonlinearity choice. Instead of using xσ(x) to approximate Φ(x), we used 0.5x(1 + tanh[√(2/π)(x + 0.044715x³)])¹ or xσ(1.702x). Both are sufficiently fast, easy-to-implement approximations, and we used the former in every experiment in this paper.
# 5 CONCLUSION
For the numerous datasets evaluated in this paper, the GELU exceeded the accuracy of the ELU and ReLU consistently, making it a viable alternative to previous nonlinearities.
1Thank you to Dmytro Mishkin for bringing an approximation like this to our attention.
6 | 1606.08415#18 | Gaussian Error Linear Units (GELUs) | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks. | http://arxiv.org/pdf/1606.08415 | Dan Hendrycks, Kevin Gimpel | cs.LG | Trimmed version of 2016 draft | null | cs.LG | 20160627 | 20230606 | [] |
1606.08514 | 18 | # 3.4 Efficient and Scalable Design and Verification of Models and Data
The effectiveness of formal methods in the domains of hardware and software has been driven by advances in underlying "computational engines", e.g., SAT, SMT, numerical simulation, and model checking. Given the scale of AI/ML systems, the complexity of their environments, and the new types of specifications involved, several advances are needed in creating computational engines for efficient and scalable training, testing, design, and verification of AI-based systems. We identify here the key challenges that must be overcome in order to achieve these advances.
Data Generation: Data is the fundamental starting point for machine learning. Any quest to improve the quality of a machine learning system must improve the quality of the data it learns from. Can formal methods help to systematically select, design and augment the data used for machine learning? | 1606.08514#18 | Towards Verified Artificial Intelligence | Verified artificial intelligence (AI) is the goal of designing AI-based
systems that that have strong, ideally provable, assurances of correctness with
respect to mathematically-specified requirements. This paper considers Verified
AI from a formal methods perspective. We describe five challenges for achieving
Verified AI, and five corresponding principles for addressing these challenges. | http://arxiv.org/pdf/1606.08514 | Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry | cs.AI | null | null | cs.AI | 20160627 | 20200723 | [
{
"id": "1606.06565"
},
{
"id": "1801.05927"
}
] |