doi (stringlengths 10-10) | chunk-id (int64 0-936) | chunk (stringlengths 401-2.02k) | id (stringlengths 12-14) | title (stringlengths 8-162) | summary (stringlengths 228-1.92k) | source (stringlengths 31-31) | authors (stringlengths 7-6.97k) | categories (stringlengths 5-107) | comment (stringlengths 4-398, ⌀) | journal_ref (stringlengths 8-194, ⌀) | primary_category (stringlengths 5-17) | published (stringlengths 8-8) | updated (stringlengths 8-8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1609.09106 | 65 | In practice, all eight weight matrices are concatenated into one large matrix for computational efficiency.
This dropout operation is generally only applied inside the main LSTM, not in the smaller HyperLSTM cell. For larger systems we can apply dropout to both networks.
A.2.3. IMPLEMENTATION DETAILS AND WEIGHT INITIALIZATION FOR HYPERLSTM
This section may be useful to readers who want to implement their own version of the HyperLSTM Cell, as we will discuss initialization of the parameters for Equations 10 to 13. We recommend implementing the HyperLSTM within the same interface as a normal recurrent network cell so that using the HyperLSTM will not be any different than using a normal RNN. These initialization parameters have been found to work well with our experiments, but they may be far from optimal depending on the task at hand. A reference implementation developed using the TensorFlow (Abadi et al., 2016) framework can be found at http://blog.otoro.net/2016/09/28/hyper-networks/. | 1609.09106#65 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
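The chunk above (1609.09106, chunk 65) notes that, in practice, all eight LSTM weight matrices are concatenated into one large matrix for efficiency. The NumPy sketch below illustrates that idea only; the gate stacking order, the toy sizes, and the names W_h and W_x are assumptions, not the paper's code.

```python
# Illustrative sketch (NumPy): compute all four LSTM gate pre-activations with
# two stacked weight matrices instead of eight separate matmuls.
import numpy as np

N, M = 8, 5                              # hidden size, input size (toy values)
rng = np.random.default_rng(0)

W_h = rng.standard_normal((4 * N, N))    # [W_h^i; W_h^g; W_h^f; W_h^o] stacked
W_x = rng.standard_normal((4 * N, M))    # [W_x^i; W_x^g; W_x^f; W_x^o] stacked
b = np.zeros(4 * N)

h_prev = rng.standard_normal(N)
x_t = rng.standard_normal(M)

pre = W_h @ h_prev + W_x @ x_t + b       # one fused computation
i, g, f, o = np.split(pre, 4)            # recover the per-gate pre-activations
```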
1609.08675 | 66 | 7. REFERENCES [1] Freebase: A community-curated database of well-known people, places, and things. https://www.freebase.com. [2] Google I/O 2013 - semantic video annotations in the Youtube Topics API: Theory and applications. https://www.youtube.com/watch?v=wf_77z1H-vQ.
[3] Knowledge Graph Search API. https://developers.google.com/knowledge-graph/.
[4] Tensorflow: Image recognition. https://www.tensorflow.org/tutorials/image_recognition.
[5] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. In Proceedings of the International Conference on Computer Vision (ICCV), 2005. | 1609.08675#66 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 66 | The HyperLSTM Cell will be located inside the HyperLSTM, as described in Equation 10. It is a normal LSTM cell with Layer Normalization. The inputs to the HyperLSTM Cell will be the concatenation of the input signal and the hidden units of the main LSTM cell. The biases in Equation 10 are initialized to zero and Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.
The embedding vectors are produced by the HyperLSTM Cell at each timestep by linear projection described in Equation 11. The weights for the first two equations are initialized to be zero, and the biases are initialized to one. The weights for the third equation are initialized to be a small normal random variable with standard deviation of 0.01. | 1609.09106#66 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
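Chunk 66 above says the embedding vectors come from linear projections of the hyper-cell state, with the first two projections initialized to zero weights and unit biases, and the third to small normal weights (std 0.01). A minimal NumPy sketch of that initialization follows; the names N_hyper, N_z, z_h, z_x, z_b and the toy sizes are assumptions, not the reference implementation.

```python
# Sketch of the embedding-projection initialization described above (NumPy).
import numpy as np

N_hyper, N_z = 128, 4            # hyper-cell size, embedding size (toy values)
rng = np.random.default_rng(0)

# First two projections (for z_h and z_x): zero weights, biases of one.
W_hh, b_hh = np.zeros((N_z, N_hyper)), np.ones(N_z)
W_hx, b_hx = np.zeros((N_z, N_hyper)), np.ones(N_z)

# Third projection (for z_b): small normal weights, no bias term.
W_hb = 0.01 * rng.standard_normal((N_z, N_hyper))

def embeddings(h_hat):
    """Linear projections of the hyper-cell state into embedding vectors."""
    z_h = W_hh @ h_hat + b_hh
    z_x = W_hx @ h_hat + b_hx
    z_b = W_hb @ h_hat
    return z_h, z_x, z_b

z_h, z_x, z_b = embeddings(np.zeros(N_hyper))   # z_h and z_x start as all-ones
```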
1609.08675 | 67 | Actions as space-time shapes. In Proceedings of the International Conference on Computer Vision (ICCV), 2005. [6] J. Deng, W. Dong, R. Socher, L. jia Li, K. Li, and L. Fei-fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[7] M. Everingham, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge, 2009.
[8] L. Fei-fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28, 2006.
[9] R. Girshick. Fast R-CNN. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.
[10] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007. | 1609.08675#67 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 67 | The weight scaling vectors that modify the weight matrices are generated from these embedding vectors, as per Equation 12. Orthogonal initialization is applied to W_h and W_x, while b_0 is initialized to zero. W_bz is also initialized to zero. For the weight scaling vectors, we used a method described in Recurrent Batch Normalization (Cooijmans et al., 2016) where the scaling vectors are initialized to 0.1 rather than 1.0, and this has been shown to help gradient flow. Therefore, for the weight matrices W_hz and W_xz, we initialize them to a constant value of 0.1/N_z to maintain this property.
The only place we use dropout is in the single location in Equation 13, developed in Recurrent Dropout without Memory Loss (Semeniuta et al., 2016). We can use this dropout gate like any other normal dropout gate in a feed-forward network.
A.3 EXPERIMENT SETUP DETAILS AND HYPER PARAMETERS
A.3.1 USING STATIC HYPERNETWORKS TO GENERATE FILTERS FOR CONVOLUTIONAL NETWORKS AND MNIST | 1609.09106#67 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
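Chunk 67 above explains that scaling vectors generated from the embeddings modulate the main weight matrices, and that a constant initialization of 0.1/N_z makes the scaling start near 0.1. The NumPy sketch below shows the row-wise scaling idea under assumed names (W_h, W_hz, z_h, d_h); it is an illustration, not the authors' implementation.

```python
# Row-wise weight scaling: a hypernetwork-generated vector d rescales each row
# of a recurrent weight matrix. With z all ones and W_hz = 0.1/N_z, d starts
# at roughly 0.1 everywhere, matching the initialization described above.
import numpy as np

N, N_z = 8, 4
rng = np.random.default_rng(0)

W_h = rng.standard_normal((N, N))        # main LSTM recurrent weights
W_hz = np.full((N, N_z), 0.1 / N_z)      # scaling projection, constant init
z_h = np.ones(N_z)                       # embeddings start near one (see above)

d_h = W_hz @ z_h                         # ~0.1 everywhere at initialization
h_prev = rng.standard_normal(N)

scaled = d_h * (W_h @ h_prev)            # same as scaling the rows of W_h
assert np.allclose(scaled, (d_h[:, None] * W_h) @ h_prev)
```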
1609.08675 | 68 | [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. [12] F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 961–970, 2015.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computing, 9(8), Nov. 1997.
[14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning (ICML), pages 448–456, 2015.
[15] H. Jegou, F. Perronnin, M. Douze, J. Sanchez, P. Perez, and C. Schmid. Aggregating local image descriptors into compact codes. IEEE Trans. Pattern Anal. Mach. Intell., 34(9), Sept. 2012. | 1609.08675#68 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 68 | A.3.1 USING STATIC HYPERNETWORKS TO GENERATE FILTERS FOR CONVOLUTIONAL NETWORKS AND MNIST
We train the network with a 55000 / 5000 / 10000 split for the training, validation and test sets, use the 5000 validation samples for early stopping, and train the network using Adam (Kingma & Ba, 2015) with a learning rate of 0.001 on mini-batches of size 1000. To decrease overfitting, we pad MNIST training images to 30x30 pixels and random crop to 28x28.¹
Model | Test Error | Params of 2nd Kernel
Normal Convnet | 0.72% | 12,544
Hyper Convnet | 0.76% | 4,244
Table 7: MNIST classification with hypernetwork-generated weights.
A.3.2 STATIC HYPERNETWORKS FOR RESIDUAL NETWORK ARCHITECTURE AND CIFAR-10
We train both the normal residual network and the hypernetwork version using a 45000 / 5000 / 10000 split for training, validation, and test set. The 5000 validation samples are randomly chosen and isolated from the original 50000 training samples. We train the entire setup with a mini-batch
¹An IPython notebook demonstrating the MNIST Hypernetwork experiment is available at this website: http://blog.otoro.net/2016/09/28/hyper-networks/.
| 1609.09106#68 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
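Chunk 68 above reduces overfitting on MNIST by padding images to 30x30 and taking random 28x28 crops. One possible NumPy implementation of that augmentation is sketched here; the (N, 28, 28) batch layout and the function name are assumptions.

```python
# Pad-then-random-crop augmentation: zero-pad 28x28 images to 30x30, then take
# a random 28x28 crop from each padded image.
import numpy as np

def pad_and_random_crop(images, pad=1, rng=np.random.default_rng(0)):
    n, h, w = images.shape
    padded = np.pad(images, ((0, 0), (pad, pad), (pad, pad)))  # zeros by default
    out = np.empty_like(images)
    for i in range(n):
        top = rng.integers(0, 2 * pad + 1)
        left = rng.integers(0, 2 * pad + 1)
        out[i] = padded[i, top:top + h, left:left + w]
    return out

batch = np.zeros((4, 28, 28), dtype=np.float32)
augmented = pad_and_random_crop(batch)   # still shape (4, 28, 28)
```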
1609.08675 | 69 | [16] Y. Jiang, J. Liu, A. Roshan Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar. THUMOS challenge: Action recognition with a large number of classes. http://crcv.ucf.edu/THUMOS14, 2014.
[17] Y.-G. Jiang, Z. Wu, J. Wang, X. Xue, and S.-F. Chang. Exploiting feature and class relationships in video categorization with regularized deep neural networks. arXiv preprint arXiv:1502.07209, 2015.
[18] M. I. Jordan. Hierarchical mixtures of experts and the em algorithm. Neural Computation, 6, 1994.
[19] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1725–1732, Columbus, Ohio, USA, 2014. | 1609.08675#69 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 69 | ¹An IPython notebook demonstrating the MNIST Hypernetwork experiment is available at this website: http://blog.otoro.net/2016/09/28/hyper-networks/.
size of 128 using Nesterov Momentum SGD for the normal version and Adam for the hypernetwork version, both with a learning rate schedule. We apply L2 regularization of 0.0005 on the kernel weights, and also on the hypernetwork-generated kernel weights. To decrease overfitting, we apply light data augmentation: we pad training images to 36x36 pixels, random crop to 32x32, and perform random horizontal flips.
Table 8: Learning Rate Schedule for Nesterov Momentum SGD
<step | learning rate
28,000 | 0.10000
56,000 | 0.02000
84,000 | 0.00400
112,000 | 0.00080
140,000 | 0.00016
Table 9: Learning Rate Schedule for Hyper Network / Adam
<step | learning rate
168,000 | 0.00200
336,000 | 0.00100
504,000 | 0.00020
672,000 | 0.00005
A.3.3 CHARACTER-LEVEL PENN TREEBANK
The hyper-parameters of all the experiments were selected through non-extensive grid search on the validation set. Whenever possible, we used reported learning rates and batch sizes in the literature that had been used for similar experiments performed in the past. | 1609.09106#69 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
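Chunk 69 above gives piecewise-constant learning-rate schedules (Tables 8 and 9). One simple way to express such a schedule is a threshold lookup, sketched here in plain Python; the helper name is an assumption and the values are copied from the tables.

```python
# Piecewise-constant learning-rate schedules from Tables 8 and 9.
SGD_SCHEDULE = [(28_000, 0.10000), (56_000, 0.02000), (84_000, 0.00400),
                (112_000, 0.00080), (140_000, 0.00016)]

ADAM_SCHEDULE = [(168_000, 0.00200), (336_000, 0.00100),
                 (504_000, 0.00020), (672_000, 0.00005)]

def learning_rate(step, schedule):
    """Return the rate of the first threshold the step is still below."""
    for threshold, rate in schedule:
        if step < threshold:
            return rate
    return schedule[-1][1]          # keep the final rate afterwards

assert learning_rate(10_000, SGD_SCHEDULE) == 0.1
assert learning_rate(60_000, SGD_SCHEDULE) == 0.004
```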
1609.08675 | 70 | [20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
[21] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. Hmdb: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV), 2011.
[22] I. Laptev and T. Lindeberg. Space-time interest points. In Proceedings of the International Conference on Computer Vision (ICCV), 2003.
[23] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[24] S. Ma, S. A. Bargal, J. Zhang, L. Sigal, and S. Sclaroff. Do less and achieve more: Training cnns for action recognition utilizing action images from the web. CoRR, abs/1512.07155, 2015. | 1609.08675#70 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 70 | For Character-level Penn Treebank, we use mini-batches of size 128, to train on sequences of length 100. We trained the model using Adam (Kingma & Ba, 2015) with a learning rate of 0.001 and gra- dient clipping of 1.0. During evaluation, we generate the entire sequence, and do not use information about previous test errors for prediction, e.g., dynamic evaluation (Graves, 2013; Rocki, 2016b). As mentioned earlier, we apply dropout to the input and output layers, and also apply recurrent dropout with a keep probability of 90%. For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.
We also experimented with a version of the model using a larger embedding size of 16, and also with a lower dropout keep probability of 85%, and reported results with this "Large Embedding" model in Table 3. Lastly, we stacked two layers of this "Large Embedding" model together to measure the benefits of a multi-layer version of HyperLSTM, with a dropout keep probability of 80%.
# A.3.4 HUTTER PRIZE WIKIPEDIA | 1609.09106#70 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
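Chunk 70 above trains with Adam and a gradient-clipping threshold of 1.0. Assuming the common global-norm reading of that clipping (the chunk does not specify the variant), a NumPy sketch:

```python
# Gradient clipping by global norm: rescale all gradients jointly so their
# combined L2 norm does not exceed the threshold (1.0 in the experiment above).
import numpy as np

def clip_by_global_norm(grads, clip_norm=1.0):
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > clip_norm:
        scale = clip_norm / global_norm
        grads = [g * scale for g in grads]
    return grads, global_norm

grads = [np.full((3, 3), 2.0), np.full(3, -1.0)]
clipped, norm = clip_by_global_norm(grads, clip_norm=1.0)
```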
1609.08675 | 71 | [25] V. Mnih and G. Hinton. Learning to label aerial images from noisy data. In Proceedings of the 29th Annual International Conference on Machine Learning (ICML), June 2012.
[26] J. Y.-H. Ng, M. J. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), pages 4694–4702, 2015.
[27] F. Perronnin and C. Dance. Fisher kernels on visual
vocabularies for image categorization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007. [28] A. Quattoni and A. Torralba. Recognizing indoor scenes. In
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[29] S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and | 1609.08675#71 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 71 | # A.3.4 HUTTER PRIZE WIKIPEDIA
As enwik8 is a bigger dataset compared to Penn Treebank, we will use 1800 units for our networks. In addition, we perform training on sequences of length 250. Our normal HyperLSTM Cell consists of 256 units, and we use an embedding size of 64.
Our setup is similar to the previous experiment, using the same mini-batch size, learning rate, weight initialization, gradient clipping parameters and optimizer. We do not use dropout for the input and output layers, but still apply recurrent dropout with a keep probability of 90%. For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.
As in (Chung et al., 2015), we train on the first 90M characters of the dataset, use the next 5M as a validation set for early stopping, and the last 5M characters as the test set.
In this experiment, we also experimented with a slightly larger version of HyperLSTM with 2048 hidden units. This version of the model uses 2048 hidden units for the main network, in line with similar models for this experiment in other works. In addition, its HyperLSTM Cell consists of 512
21 | 1609.09106#71 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
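Chunk 71 above trains on the first 90M characters of enwik8, validates on the next 5M, and tests on the last 5M. A straightforward sketch of that split follows; the file path is a placeholder assumption.

```python
# 90M / 5M / 5M character split of enwik8, as described above.
def split_enwik8(path="enwik8"):
    with open(path, "rb") as f:
        data = f.read()                   # the original file is 100M bytes
    train = data[:90_000_000]
    valid = data[90_000_000:95_000_000]
    test = data[95_000_000:]
    return train, valid, test
```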
1609.08675 | 72 | [29] S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and
A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. ArXiv e-prints, Dec. 2014. [30] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
[31] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In International Conference on Learning Representations (ICLR). [32] J. Shotton, J. Winn, C. Rother, and A. Criminisi.
Textonboost: Joint appearance, shape and context modeling for multi-class object. In Proceedings of the European Conference on Computer Vision (ECCV), 2006. | 1609.08675#72 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 72 | 21
units with an embedding size of 64. Given the larger number of nodes in both the main LSTM and HyperLSTM cell, recurrent dropout is also applied to the HyperLSTM Cell of this model, where we use a lower dropout keep probability of 85%, and train on an increased sequence length of 300.
# A.3.5 HANDWRITING SEQUENCE GENERATION
We will use the same model architecture described in (Graves, 2013) and use a Mixture Density Network layer (Bishop, 1994) to generate a mixture of bi-variate Gaussian distributions to model the pen location at each time step. We normalize the data and use the same train/validation split as per (Graves, 2013) in this experiment. We remove samples less than length 300 as we found these samples contain a lot of recording errors and noise. After the pre-processing, as the dataset is small, we introduce data augmentation: a random scaling factor chosen uniformly from +/- 10% is applied to the samples used for training. | 1609.09106#72 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
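Chunk 72 above removes handwriting samples shorter than 300 points and augments the rest with a random scaling factor drawn uniformly from +/- 10%. A NumPy sketch of that preprocessing; the (T, 3) stroke layout (dx, dy, pen state) and the function name are assumptions.

```python
# Filter short stroke sequences and apply a random uniform scale in [0.9, 1.1]
# to the pen offsets, as described above.
import numpy as np

def augment_strokes(samples, min_len=300, rng=np.random.default_rng(0)):
    kept = [s for s in samples if len(s) >= min_len]
    out = []
    for s in kept:
        scale = rng.uniform(0.9, 1.1)
        s = s.copy()
        s[:, 0:2] *= scale            # scale dx, dy; leave the pen state alone
        out.append(s)
    return out
```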
1609.08675 | 73 | Textonboost: Joint appearance, shape and context modeling for multi-class object. In Proceedings of the European Conference on Computer Vision (ECCV), 2006.
[33] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. In CRCV-TR-12-01, 2012.
[34] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L. Li. The new data and new challenges in multimedia research. CoRR, abs/1503.01817, 2015.
[35] D. Tran, L. D. Bourdev, R. Fergus, L. Torresani, and M. Paluri. C3D: generic features for video analysis. CoRR, abs/1412.0767, 2014.
[36] H. Wang, M. M. Ullah, A. Kläser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features for action recognition. In Proc. BMVC, 2009. | 1609.08675#73 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 73 | One concern we want to address is the lack of a test set in the data split methodology devised in (Graves, 2013). In this task, qualitative assessment of generated handwriting samples is arguably just as important as the quantitative log likelihood score of the results. Due to the small size of the dataset, we want to use as large a portion of the dataset as possible to train our models, in order to generate better-quality handwriting samples and judge our models qualitatively in addition to examining the log-loss numbers. For this task we therefore use the same training / validation split as (Graves, 2013), with the caveat that we may be somewhat overfitting to the validation set in the quantitative results. In future work, we will explore using larger datasets to conduct a more rigorous quantitative analysis.
For model training, we apply recurrent dropout and also dropout to the output layer with a keep probability of 0.95. The model is trained on mini-batches of size 32 containing sequences of variable length. We trained the model using Adam (Kingma & Ba, 2015) with a learning rate of 0.0001 and gradient clipping of 5.0. Our HyperLSTM Cell consists of 128 units and a signal size of 4. For baseline models, Orthogonal Initialization (Henaff et al., 2016) is performed for all weights.
# A.3.6 NEURAL MACHINE TRANSLATION | 1609.09106#73 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
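The handwriting model described in chunks 72 and 73 above outputs a Mixture Density Network over bivariate Gaussians for the pen offsets. The NumPy sketch below shows how a pen offset could be sampled from such a mixture; the parameter names (pi, mu, sigma, rho) follow the usual MDN handwriting convention and are assumptions, not the paper's exact interface.

```python
# Sample a (dx, dy) pen offset from a mixture of bivariate Gaussians.
import numpy as np

def sample_pen_offset(pi, mu, sigma, rho, rng=np.random.default_rng(0)):
    """pi: (K,) mixture weights; mu: (K, 2); sigma: (K, 2); rho: (K,)."""
    k = rng.choice(len(pi), p=pi)                     # pick a mixture component
    cov = np.array([
        [sigma[k, 0] ** 2,                   rho[k] * sigma[k, 0] * sigma[k, 1]],
        [rho[k] * sigma[k, 0] * sigma[k, 1], sigma[k, 1] ** 2],
    ])
    return rng.multivariate_normal(mu[k], cov)        # (dx, dy) sample

pi = np.array([0.7, 0.3])
mu = np.array([[0.0, 0.0], [1.0, -1.0]])
sigma = np.array([[0.5, 0.5], [0.2, 0.2]])
rho = np.array([0.0, 0.3])
dx, dy = sample_pen_offset(pi, mu, sigma, rho)
```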
1609.08675 | 74 | [37] S. Wiesler, A. Richard, R. Schlüter, and H. Ney. Mean-normalized stochastic gradient for large-scale deep learning. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2014, Florence, Italy, May 4-9, 2014, pages 180–184. IEEE, 2014.
[38] J. Xiao, K. A. Ehinger, J. Hays, A. Torralba, A. Oliva, and J. Xiao. Sun database: Exploring a large collection of scene categories, 2013.
[39] Z. Xu, Y. Yang, and A. G. Hauptmann. A discriminative cnn video representation for event detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[40] H.-F. Yu, P. Jain, P. Kar, and I. Dhillon. Large-scale multi-label learning with missing labels. In Proceedings of The 31st International Conference on Machine Learning (ICML), pages 593–601, 2014.
[41] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013. | 1609.08675#74 | YouTube-8M: A Large-Scale Video Classification Benchmark | Many recent advancements in Computer Vision are attributed to large datasets.
Open-source software packages for Machine Learning and inexpensive commodity
hardware have reduced the barrier of entry for exploring novel approaches at
scale. It is possible to train models over millions of examples within a few
days. Although large-scale datasets exist for image understanding, such as
ImageNet, there are no comparable size video classification datasets.
In this paper, we introduce YouTube-8M, the largest multi-label video
classification dataset, composed of ~8 million videos (500K hours of video),
annotated with a vocabulary of 4800 visual entities. To get the videos and
their labels, we used a YouTube video annotation system, which labels videos
with their main topics. While the labels are machine-generated, they have
high-precision and are derived from a variety of human-based signals including
metadata and query click signals. We filtered the video labels (Knowledge Graph
entities) using both automated and manual curation strategies, including asking
human raters if the labels are visually recognizable. Then, we decoded each
video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to
extract the hidden representation immediately prior to the classification
layer. Finally, we compressed the frame features and make both the features and
video-level labels available for download.
We trained various (modest) classification models on the dataset, evaluated
them using popular evaluation metrics, and report them as baselines. Despite
the size of the dataset, some of our models train to convergence in less than a
day on a single machine using TensorFlow. We plan to release code for training
a TensorFlow model and for computing metrics. | http://arxiv.org/pdf/1609.08675 | Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan | cs.CV | 10 pages | null | cs.CV | 20160927 | 20160927 | [
{
"id": "1502.07209"
}
] |
1609.09106 | 74 | # A.3.6 NEURAL MACHINE TRANSLATION
Our experimental procedure follows the procedure outlined in Sections 8.1 to 8.4 of the GNMT paper (Wu et al., 2016). We only performed experiments with a single model and did not conduct experiments with Reinforcement Learning or Model Ensembles as described in Sections 8.5 and 8.6 of the GNMT paper.
The GNMT paper outlines several methods for the training procedure, and investigated several ap- proaches including combining Adam and SGD optimization methods, in addition to weight quanti- zation schemes. In our experiment, we used only the Adam (Kingma & Ba, 2015) optimizer with the same hyperparameters described in the GNMT paper. We did not employ any quantization schemes.
We replaced LSTM cells in the GNMT WPM-32K architecture, with LayerNorm HyperLSTM cells with the same number of hidden units. In this experiment, our HyperLSTM Cell consists of 128 units with an embedding size of 32.
A.4 EXAMPLES OF GENERATED WIKIPEDIA TEXT
The eastern half of Russia varies from Modern to Central Europe. Due to similar lighting and the extent of the combination of long tributaries to the [[Gulf of Boston]], it is more of a private warehouse than the [[Austro-Hungarian Orthodox Christian and Soviet Union]]. | 1609.09106#74 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
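Chunk 74 above swaps the LSTM cells of the GNMT WPM-32K model for LayerNorm HyperLSTM cells with the same number of hidden units, which is possible because both honor the same recurrent-cell interface (as chunk 65 also recommends). A minimal pure-Python sketch of that drop-in interface; the class and function names are assumptions and the toy cell is not an LSTM.

```python
# Any cell mapping (x_t, state) -> (output, new_state) can be swapped into the
# recurrence without touching the surrounding model.
import numpy as np

class ToyRNNCell:
    """Stand-in for an LSTM or LayerNorm HyperLSTM cell with the same interface."""
    def __init__(self, num_units):
        self.num_units = num_units
    def zero_state(self):
        return np.zeros(self.num_units)
    def __call__(self, x_t, state):
        new_state = np.tanh(x_t[:self.num_units] + state)
        return new_state, new_state              # (output, new_state)

def run_rnn(cell, inputs):
    state, outputs = cell.zero_state(), []
    for x_t in inputs:
        out, state = cell(x_t, state)
        outputs.append(out)
    return np.stack(outputs)

seq = np.zeros((5, 8))
outputs = run_rnn(ToyRNNCell(8), seq)            # swap in any conforming cell
```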
1609.09106 | 75 | ==Demographic data base==
# controversial
# ''Austrian
# Spelling'']]
[[Image:Auschwitz map.png|frame|The [[Image:Czech Middle East SSR chief state 103.JPG|thumb|Serbian Russia movement]] [[1593]]&ndash;[[1719]], and set up a law of [[ parliamentary sovereignty]] and unity in Eastern churches.
In medieval Roman Catholicism Tuba and Spanish controlled it until the reign of Burgundian kings and resulted in many changes in multiculturalism, though the [[Crusades]], usually started following the [[Treaty of Portugal]], shored the title of three major powers only a strong part. | 1609.09106#75 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 76 | [[French Marines]] (prompting a huge change in [[President of the Council of the Empire]], only after about [[1793]], the Protestant church, fled to the perspective of his heroic declaration of government and, in the next fifty years, [[Christianity|Christian]] and [[Jutland]]. Books combined into a well-published work by a single R. (Sch. M. ellipse poem) tradition in St Peter also included 7:1, he dwell upon the apostle, scripture and the latter of Luke; totally unknown, a distinct class of religious congregations that describes in number of [[remor]]an traditions such as the [[Germanic tribes]] (Fridericus or Lichteusen and the Wales). Be introduced back to the [[14th century]], as related in the [[New Testament]] and in its elegant [[ Anglo-Saxon Chronicle]], although they branch off the characteristic traditions which Saint [[Philip of Macedon]] asserted.
Ae also in his native countries.
In [[1692]], Seymour was barged at poverty of young English children, which cost almost the preparation of the marriage to him. | 1609.09106#76 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 77 | Ae also in his native countries.
In [[1692]], Seymour was barged at poverty of young English children, which cost almost the preparation of the marriage to him.
Burke's work was a good step for his writing, which was stopped by clergy in the Pacific, where he had both refused and received a position of successor to the throne. Like the other councillors in his will, the elder Reinhold was not in the Duke, and he was virtually non-father of Edward I, in order to recognize [[Henry II of England|Queen Enrie]] of Parliament.
The Melchizedek Minister Qut]] signed the [[Soviet Union]], and forced Hoover to provide [[Hoover (disambiguation) |hoover]]s in [[1844]], [[1841]].
His work on social linguistic relations is divided to the several times of polity for educatinnisley is 760 Li Italians. After Zaiti's death , and he was captured August 3, he witnessed a choice better by public, character, repetitious, punt, and future.
Figure 14: enwik8 sample generated from 2048-unit Layer Norm HyperLSTM
== Quatitis==
:/âMain article: [[sexagesimal]]ââ | 1609.09106#77 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 78 | Figure 14: enwik8 sample generated from 2048-unit Layer Norm HyperLSTM
== Quatitis==
:/''Main article: [[sexagesimal]]''
Sexual intimacy was traditionally performed by a male race of the [[ mitochondria]] of living things. The next geneme is used by ââ Clitoronââ into short forms of [[sexual reproduction]]. When a maternal suffeach-Lashe]] to the myriad of a "masterâs character ". He recognizes the associated reflection of [[force call carriers]], the [[Battle of Pois except fragile house and by historians who have at first incorporated his father.
==Geography==
The island and county top of Guernsey consistently has about a third of its land, centred on the coast subtained by mountain peels with mountains, squares, and lakes that cease to be links with the size and depth of sea level and weave in so close to lowlands. Strategically to the border of the country also at the southeast corner of the province of Denmark do not apply, but sometimes west of dense climates of coastal Austria and west Canada, the Flemish area of the continent actually inhabits [[tropical geographical transition ]] and transitions from [[soil]] to [[snow]] residents.]]
==Definition== | 1609.09106#78 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 79 | ==Definition==
The symbols are ''quotational'' and '''distinct''' or advanced. {{ref| no_1}} Older readings are used for [[phrase]]s, especially, [[ancient Greek]], and [[Latin]] in their development process. Several varieties of permanent systems typically refer to [[primordial pleasure]] (for example, [[Pleistocene]], [[Classical antenni|Ctrum ]]), but its claim is that it holds the size of the coci, but is historically important both for import: brewing and commercial use.
majority of cuisine specifically refers to this period, where the southern countries developed in the 19th century. Scotland had a cultural identity of or now a key church who worked between the 8th and 60th through 6 (so that there are small single authors of detailed recommendations for them and at first) rather than
A , [[Adoptionism|adoptionists]] often started inscribed with
appearing the words | 1609.09106#79 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 80 | # A
, [[Adoptionism|adoptionists]] often started inscribed with appearing the words
distinct from two types. On the group definition the adjective fightingââ is until Crown Violence Association]], in which the higher education [[motto]] (despite the resulting attack on [[medical treatment]]) peaked on [[15 December]], [[2005]]. At 30 percent, up to 50% of the electric music from the period was created by Voltaire, but Newton promoted the history of his life.
''
Publications in the Greek movie ââ[[The Great Theory of Bertrand Russell J]ââ, also kept an important part into the inclusion of ââ[[The Beast for the Passage of Study]]ââ, began in [[1869]], opposite the existence of racial matters. Many of Maryâs religious faiths ( including the [[Mary Sue Literature]] in the United States) incorporated much of Christianity within Hispanic [[Sacred text]]s. | 1609.09106#80 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 81 | But controversial belief must be traced back to the 1950s stated that their anticolonial forces required the challenge of even lingering wars tossing nomon before leaves the bomb in paint on the South Island, known as [[Quay]], facing [[Britain]], though he still holds to his ancestors a strong ancestor of Orthodoxy. Others explain that the process of reverence occurred from [[Common Hermitage]], when the [[Crusade|Speakers]] laid his lifespan in [[Islam]] into the north of Israel. At the end of the [[14th century BCE]], the citadel of [[ Israel]] set Eisenace itself in the [[Abyssinia]]n islands, which was Faroeâs Dominican Republic claimed by the King.
Figure 15: enwik8 sample generated from 2048-unit Layer Norm HyperLSTM
A.5 EXAMPLES OF RANDOMLY CHOSEN GENERATED HANDWRITING SAMPLES
[Image of generated handwriting strokes; no machine-readable text.]
odd ⢠Cores boon. ~ Perr ereticllor Coon roles âaan | 1609.09106#81 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 82 | odd ⢠Cores boon. ~ Perr ereticllor Coon roles âaan
[Image of generated handwriting strokes; no machine-readable text.]
Figure 16: Handwriting samples generated from LSTM
25 | 1609.09106#82 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 83 | conn! wot Hidtte fan perSye Broa ancighMinwy ok
[Image of generated handwriting strokes; no machine-readable text.]
Figure 17: Handwriting samples generated from Layer Norm LSTM
26 | 1609.09106#83 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 84 | ibe J thd foke atc |) woe vcd Wig Wi'nede. Testing ao
Figure 17: Handwriting samples generated from Layer Norm LSTM
[Image of generated handwriting strokes; no machine-readable text.]
Figure 18: Handwriting samples generated from HyperLSTM
; | 1609.09106#84 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 85 | paniter yronhe pins (de by lit Mhorgectdrly tr
Figure 18: Handwriting samples generated from HyperLSTM
A.6 EXAMPLES OF RANDOMLY CHOSEN MACHINE TRANSLATION SAMPLES
We randomly selected translation samples generated from both LSTM baseline and HyperLSTM models from the WMT '14 En→Fr Test Set. Given an English phrase, we can compare between the correct French translation, the LSTM translation, and the HyperLSTM translation.
English Input
I was expecting to see gnashing of teeth and a fight breaking out at the gate
French (Ground Truth)
Je m' attendais a voir des grincements de dents et une bagarre éclater a la porte
LSTM Translation
Je m' attendais a voir des larmes de dents et un combat a la porte
HyperLSTM Translation
Je m' attendais a voir des dents grincer des dents et une bataille éclater a la porte
English Input
French (Ground Truth)
LSTM Translation
HyperLSTM Translation
English Input
Prosecuting , Anne Whyte said : " If anyone should know not to the break the law , it is a criminal solicitor . "
French (Ground Truth) | 1609.09106#85 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 86 | HyperLSTM Translation
English Input
Prosecuting , Anne Whyte said : " If anyone should know not to the break the law , it is a criminal solicitor . "
French (Ground Truth)
Le procureur Anne Whyte a déclaré : « Si quelqu' savoir qu' il ne faut pas violer la loi , c' est avocat pénaliste . »
LSTM Translation
Prosecuting , Anne Whyte a dit : « Si quelqu' un doit savoir qu' il ne faut pas enfreindre la loi , c' est un solicitor criminel
HyperLSTM Translation
En poursuivant , Anne Whyte a dit : « Si quelqu' un doit savoir ne pas enfreindre la loi , c' est un avocat criminel
English Input
According to her , the CSRS was invited to a mediation and she asked for an additional period for consideration
French (Ground Truth)
Selon elle , la CSRS a été invitée a une médiation et elle a demandé un délai supplémentaire pour y réfléchir
LSTM Translation
Selon elle , le SCRS a été invité a une médiation et elle a demandé un délai supplémentaire | 1609.09106#86 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 87 | LSTM Translation
Selon elle , le SCRS a été invité a une médiation et elle a demandé un délai supplémentaire
HyperLSTM Translation
Selon elle , le SCRS a été invité a une médiation et elle a demandé une période de réflexion supplémentaire
English Input
Relations between the US and Germany have come under strain following claims that the NSA bugged Chancellor Angela Merkel 's phone
French (Ground Truth)
Les relations entre les Etats-Unis et l' Allemagne ont été mises a rude épreuve a la suite de plaintes selon lesquelles la NSA avait mis sur écoute le téléphone portable de la chanceliére allemande Angela Merkel
LSTM Translation
Les relations entre les Etats-Unis et l' Allemagne ont été mises a rude épreuve suite aux affirmations selon lesquelles la NSA aurait pris le téléphone de Merkel de la chanceliére Angela
HyperLSTM Translation | 1609.09106#87 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 88 | HyperLSTM Translation
Les relations entre les Etats-Unis et l' Allemagne ont été mises a rude épreuve aprés que la NSA a attaqué le téléphone de la chanceliére Angela Angela
English Input
Germany 's BfV advises executives to consider using simple prepaid mobiles when on foreign trips because of the risk that smart phones are compromised
French (Ground Truth)
Le BfV d' Allemagne conseille a ses dirigeants d' envisager d' utiliser de simples téléphones portables prépayés lors de leurs voyages a l' étranger en raison du risque d' atteinte a l' intégrité des smartphones
LSTM Translation
Le BfV allemand conseille aux dirigeants d' envisager l' utilisation de mobiles prépayés simples lors de voyages a l' étranger en raison du risque de compromission des téléphones intelligents
HyperLSTM Translation | 1609.09106#88 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.09106 | 89 | HyperLSTM Translation
Le BfV allemand conseille aux dirigeants d' envisager l' utilisation de téléphones mobiles prépayés simples lors de voyages a l' étranger en raison du risque que les téléphones intelligents soient compromis
English Input
I was on the mid-evening news that same evening , and on TV the following day as well
French (Ground Truth)
Le soir-méme , je suis au 20h , le lendemain aussi je suis a la télé
LSTM Translation
J' étais au milieu de l' actualité le soir méme , et a la télévision le lendemain également
HyperLSTM Translation
J' étais au milieu de la soirée ce soir-la et a la télévision le lendemain
29 | 1609.09106#89 | HyperNetworks | This work explores hypernetworks: an approach of using a one network, also
known as a hypernetwork, to generate the weights for another network.
Hypernetworks provide an abstraction that is similar to what is found in
nature: the relationship between a genotype - the hypernetwork - and a
phenotype - the main network. Though they are also reminiscent of HyperNEAT in
evolution, our hypernetworks are trained end-to-end with backpropagation and
thus are usually faster. The focus of this work is to make hypernetworks useful
for deep convolutional networks and long recurrent networks, where
hypernetworks can be viewed as relaxed form of weight-sharing across layers.
Our main result is that hypernetworks can generate non-shared weights for LSTM
and achieve near state-of-the-art results on a variety of sequence modelling
tasks including character-level language modelling, handwriting generation and
neural machine translation, challenging the weight-sharing paradigm for
recurrent networks. Our results also show that hypernetworks applied to
convolutional networks still achieve respectable results for image recognition
tasks compared to state-of-the-art baseline models while requiring fewer
learnable parameters. | http://arxiv.org/pdf/1609.09106 | David Ha, Andrew Dai, Quoc V. Le | cs.LG | null | null | cs.LG | 20160927 | 20161201 | [
{
"id": "1603.09025"
}
] |
1609.08144 | 0 | 6 1 0 2
# Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi yonghui,schuster,zhifengc,qvl,[email protected]
Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean
# Abstract | 1609.08144#0 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 1 | Abstract Recent neural network sequence models with softmax classiï¬ers have achieved their best lan- guage modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction un- ambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classiï¬er. Our pointer sentinel- LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parame- ters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vo- cabularies and larger corpora we also introduce the freely available WikiText corpus.1 | 1609.07843#1 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
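The abstract above quotes a Penn Treebank perplexity of 70.9. As a brief aside, the sketch below shows how such a perplexity figure is conventionally derived from a model's per-token probabilities; the probabilities used here are invented purely for illustration.

```python
import math

# Perplexity = exp(mean negative log-likelihood) over the evaluated tokens.
# The per-token probabilities below are made up to illustrate the formula.
token_probs = [0.05, 0.20, 0.01, 0.10, 0.30]        # p(w_i | w_1, ..., w_{i-1})

mean_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(mean_nll)
print(f"mean NLL = {mean_nll:.3f} nats, perplexity = {perplexity:.1f}")
```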
1609.08144 | 1 | Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference â sometimes prohibitively so in the case of very large data sets and large models. Several authors have also charged that NMT systems lack robustness, particularly when input sentences contain rare words. These issues have hindered NMTâs use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Googleâs Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using residual connections as well as attention connections from the decoder network to the encoder. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the ï¬nal translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a | 1609.08144#1 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 2 | QO 0 0 we {}â>+(}>- 10> 0-0 Fed Chair Janet Yellen... raised rates. Ms. [ 29? : . : A : r i (ee 5 : : z ' é H . ' ' Sentinel Pptr( Yellen) g % >| 2arqvark Bernanke Rosenthal Yellen zebra &2] + t 4 t . Zz] ' : ' 8 : [I : o a anll feooaoello r Pvocab( Yellen) p(Yellen) = g Pvocab(Yellen) + (1 â g) ppte(Yellen)
Figure 1. Illustration of the pointer sentinel-RNN mixture model. g is the mixture gate which uses the sentinel to dictate how much probability mass to give to the vocabulary.
states, in effect increasing hidden state capacity and providing a path for gradients not tied to timesteps. Even with attention, the standard softmax classifier that is being used in these models often struggles to correctly predict rare or previously unknown words.
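As a hedged sketch of the mixture step pictured in Figure 1 (p(Yellen) = g · p_vocab(Yellen) + (1 − g) · p_ptr(Yellen)), the snippet below combines a vocabulary softmax with a pointer distribution over the recent context; the vocabulary, context window, probabilities, and gate value are all invented for illustration and this is not the paper's code.

```python
import numpy as np

# Illustrative-only mixture of a vocabulary softmax and a pointer distribution.
vocab = ["aardvark", "bernanke", "rosenthal", "yellen", "zebra"]
context = ["fed", "chair", "janet", "yellen", "raised", "rates", "ms."]

p_vocab = np.array([0.05, 0.30, 0.10, 0.15, 0.40])                 # sums to 1 over the vocabulary
p_ptr_pos = np.array([0.02, 0.03, 0.05, 0.80, 0.04, 0.03, 0.03])   # attention over context positions
g = 0.4                                                            # mixture gate from the sentinel

def mixture_prob(word):
    p_v = p_vocab[vocab.index(word)] if word in vocab else 0.0
    p_p = sum(p for w, p in zip(context, p_ptr_pos) if w == word)  # pointer mass for this word
    return g * p_v + (1.0 - g) * p_p

print(round(mixture_prob("yellen"), 3))   # a recently seen word draws most mass from the pointer
```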
# 1. Introduction | 1609.07843#2 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 2 | the ï¬nal translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units (âwordpiecesâ) for both input and output. This method provides a good balance between the ï¬exibility of âcharacterâ-delimited models and the eï¬ciency of âwordâ-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. To directly optimize the translation BLEU scores, we consider reï¬ning the models by using reinforcement learning, but we found that the improvement in the BLEU scores did not reï¬ect in the human evaluation. On the WMTâ14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% | 1609.08144#2 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 3 | # 1. Introduction
A major difficulty in language modeling is learning when to predict specific words from the immediate context. For instance, imagine a new person is introduced and two paragraphs later the context would allow one to very accurately predict this person's name as the next word. For standard neural sequence models to predict this name, they would have to encode the name, store it for many time steps in their hidden state, and then decode it when appropriate. As the hidden state is limited in capacity and the optimization of such models suffer from the vanishing gradient problem, this is a lossy operation when performed over many timesteps. This is especially true for rare words.
Models with soft attention or memory components have been proposed to help deal with this challenge, aiming to allow for the retrieval and use of relevant previous hidden
Pointer networks (Vinyals et al., 2015) provide one poten- tial solution for rare and out of vocabulary (OoV) words as a pointer network uses attention to select an element from the input as output. This allows it to produce previously unseen input tokens. While pointer networks improve per- formance on rare words and long-term dependencies they are unable to select words that do not exist in the input. | 1609.07843#3 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.07843 | 4 | We introduce a mixture model, illustrated in Fig. 1, that combines the advantages of standard softmax classiï¬ers with those of a pointer component for effective and efï¬- cient language modeling. Rather than relying on the RNN hidden state to decide when to use the pointer, as in the re- cent work of G¨ulc¸ehre et al. (2016), we allow the pointer component itself to decide when to use the softmax vocab- ulary through a sentinel. The model improves the state of the art perplexity on the Penn Treebank. Since this com- monly used dataset is small and no other freely available alternative exists that allows for learning long range depen- dencies, we also introduce a new benchmark dataset for language modeling called WikiText.
1Available for download at the WikiText dataset site
[Figure 2 diagram: output distribution p(y_N | w_1, ..., w_{N-1}) formed from the pointer and RNN softmax distributions]
Pointer Distribution Pptr(yn|w1, ..+,WN-1) Softmax ; & ' 1 1 ' \------- â Query 1! Softmax >| = RNN Distribution Pvocab(Yn|W1,---;Wn-1) | 1609.07843#4 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
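Following up on the passage above, where the pointer component itself decides, through a sentinel, when to fall back on the softmax vocabulary: the sketch below shows one way such gating can be realized, with the sentinel contributing one extra logit to the pointer's attention scores. The scores are invented and this is a schematic reading of the mechanism, not the paper's exact code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Schematic sentinel gating: one extra logit joins the pointer's attention
# scores; its share of the joint softmax becomes the gate g routed to the
# vocabulary softmax, and the remainder forms the pointer distribution.
attn_scores = np.array([0.2, 1.5, 3.0, 0.1])        # query . h_i for each context position (made up)
sentinel_score = 1.0                                 # query . sentinel vector (made up)

joint = softmax(np.append(attn_scores, sentinel_score))
g = joint[-1]                                        # probability mass handed to the vocabulary softmax
p_ptr = joint[:-1] / (1.0 - g)                       # renormalized pointer distribution

print(round(float(g), 3), p_ptr.round(3))
```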
1609.08144 | 4 | 1
# 1 Introduction
Neural Machine Translation (NMT) [41, 2] has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashion, the mapping from input text to associated output text. Its architecture typically consists of two recurrent neural networks (RNNs), one to consume the input text sequence and one to generate translated output text. NMT is often accompanied by an attention mechanism [2] which helps it cope effectively with long input sequences.
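The paragraph above describes the encoder-decoder RNN pair and the attention mechanism that accompanies it. The sketch below illustrates a single generic dot-product attention step of that kind: the current decoder state scores the encoder states and their weighted sum becomes the context vector. It is a simplified stand-in, not GNMT's exact parameterization; all tensors are random placeholders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Generic attention step for an encoder-decoder NMT model (illustrative only).
rng = np.random.default_rng(0)
enc_states = rng.normal(size=(6, 8))     # one vector per source token (e.g. top encoder layer)
dec_state = rng.normal(size=8)           # current decoder state (e.g. bottom decoder layer)

scores = enc_states @ dec_state          # alignment score for each source position
weights = softmax(scores)                # attention distribution over source tokens
context = weights @ enc_states           # context vector consumed by the decoder

print(weights.round(3), context.shape)   # attention weights and an (8,)-dim context
```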
An advantage of Neural Machine Translation is that it sidesteps many brittle design choices in traditional phrase-based machine translation [26]. In practice, however, NMT systems used to be worse in accuracy than phrase-based translation systems, especially when training on very large-scale datasets as used for the very best publicly available translation systems. Three inherent weaknesses of Neural Machine Translation are
1 | 1609.08144#4 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 5 | Figure 2. Visualization of the pointer sentinel-RNN mixture model. T RNN, is used and the RNN y the pointer network to identify likely matching wor idden states. If the pointer component is not confident, probability m: [he query, produced from applying an MLP to the last output of the s from the past. The © nodes are inner products between the query can be directed to the RNN by increasing the value of the mixture gate g via the sentinel, seen in grey. If g = 1 then only the RNN is used. If g = 0 then only the pointer is used.
# 2. The Pointer Sentinel for Language Modeling
Given a sequence of words w1, . . . , wN−1, our task is to predict the next word wN.
# 2.1. The softmax-RNN Component
# 2.2. The Pointer Network Component
In this section, we propose a modiï¬cation to pointer net- works for language modeling. To predict the next word in the sequence, a pointer network would select the member of the input sequence p(w1, . . . , wN 1) with the maximal attention score as the output. | 1609.07843#5 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 5 | 1
responsible for this gap: its slower training and inference speed, ineï¬ectiveness in dealing with rare words, and sometimes failure to translate all words in the source sentence. Firstly, it generally takes a considerable amount of time and computational resources to train an NMT system on a large-scale translation dataset, thus slowing the rate of experimental turnaround time and innovation. For inference they are generally much slower than phrase-based systems due to the large number of parameters used. Secondly, NMT lacks robustness in translating rare words. Though this can be addressed in principle by training a âcopy modelâ to mimic a traditional alignment model [31], or by using the attention mechanism to copy rare words [37], these approaches are both unreliable at scale, since the quality of the alignments varies across languages, and the latent alignments produced by the attention mechanism are unstable when the network is deep. Also, simple copying may not always be the best strategy to cope with rare words, for example when a transliteration is more appropriate. Finally, NMT systems sometimes produce output sentences that do not translate all parts of the input sentence â in other words, they fail to completely âcoverâ the input, which can result in surprising translations. | 1609.08144#5 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 6 | Recurrent neural networks (RNNs) have seen widespread use for language modeling (Mikolov et al., 2010) due to their ability to, at least in theory, retain long term dependencies. RNNs employ the chain rule to factorize the joint probabilities over a sequence of tokens: p(w1, . . . , wN) = ∏_{i=1}^{N} p(wi | w1, . . . , wi−1). More precisely, at each time step i, we compute the RNN hidden state hi according to the previous hidden state hi−1 and the input xi such that hi = RNN(xi, hi−1). When all the N − 1 words have been processed by the RNN, the final state hN−1 is fed into a softmax layer which computes the probability over a vocabulary of possible words. The simplest way to compute an attention score for a specific hidden state is an inner product with all the past hidden states h, with each hidden state hi ∈ R^H. However, if we want to compute such a score for the most recent word (since this word may be repeated), we need to include the last hidden state itself in this inner product. Taking the inner product of a vector with itself results in the | 1609.07843#6 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
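The attention scoring described in the chunk above (inner products between a query and every past hidden state, including the most recent one, followed by a softmax) can be sketched in a few lines of NumPy. The sizes, random weights, and variable names below are purely illustrative assumptions and are not taken from any released implementation of the paper.

```python
import numpy as np

def pointer_attention(query, hidden_states):
    """query: (H,); hidden_states: (L, H) past RNN outputs, newest last."""
    z = hidden_states @ query                # z_i = q^T h_i for every window position
    z = z - z.max()                          # subtract the max for numerical stability
    a = np.exp(z) / np.exp(z).sum()          # softmax over the L positions
    return a

rng = np.random.default_rng(0)
H, L = 8, 5                                  # toy hidden size and window length
h = rng.normal(size=(L, H))                  # stand-in for hidden states h_1 .. h_L
W, b = rng.normal(size=(H, H)), rng.normal(size=H)
q = np.tanh(W @ h[-1] + b)                   # query built from the last hidden state
print(pointer_attention(q, h))               # probability distribution over the window
```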
This work presents the design and implementation of GNMT, a production NMT system at Google, that aims to provide solutions to the above problems. In our implementation, the recurrent networks are Long Short-Term Memory (LSTM) RNNs [23, 17]. Our LSTM RNNs have 8 layers, with residual connections between layers to encourage gradient flow [21]. For parallelism, we connect the attention from the bottom layer of the decoder network to the top layer of the encoder network. To improve inference time, we employ low-precision arithmetic for inference, which is further accelerated by special hardware (Google's Tensor Processing Unit, or TPU). To effectively deal with rare words, we use sub-word units (also known as "wordpieces") [35] for inputs and outputs in our system. Using wordpieces gives a good balance between the flexibility of single characters and the efficiency of full words for decoding, and also sidesteps the need for special treatment of unknown words. Our beam search technique includes a length normalization procedure to deal efficiently with the problem of comparing hypotheses of different lengths during decoding, and a coverage penalty to encourage the model to translate all of the provided input. | 1609.08144#6 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
Our implementation is robust, and performs well on a range of datasets across many pairs of languages without the need for language-specific adjustments. Using the same implementation, we are able to achieve results comparable to or better than previous state-of-the-art systems on standard benchmarks, while delivering great improvements over Google's phrase-based production translation system. Specifically, on WMT'14 English-to-French, our single model scores 38.95 BLEU, an improvement of 7.5 BLEU from a single model without an external alignment model reported in [31] and an improvement of 1.2 BLEU from a single model without an external alignment model reported in [45]. Our single model is also comparable to a single model in [45], while not making use of any alignment model as being used in [45]. Likewise on WMT'14 English-to-German, our single model scores 24.17 BLEU, which is 3.4 BLEU better than a previous competitive baseline [6]. On production data, our implementation is even more effective. Human evaluations show that GNMT has reduced translation errors by 60% compared to our previous phrase-based system on many pairs of languages: English ↔ | 1609.08144#7 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 8 | pvocab(w) = softmax(U hN 1), (1)
where pvocab ∈ R^V, U ∈ R^(V×H), H is the hidden size, and V the vocabulary size. RNNs can suffer from the vanishing gradient problem. The LSTM (Hochreiter & Schmidhuber, 1997) architecture has been proposed to deal with this by updating the hidden state according to a set of gates. Our work focuses on the LSTM but can be applied to any RNN architecture that ends in a vocabulary softmax.
q = tanh(W hN−1 + b), (2)
where W ∈ R^(H×H), b ∈ R^H, and q ∈ R^H. To generate the pointer attention scores, we compute the match between the previous RNN output states hi and the query q by taking the inner product, followed by a softmax activation function to obtain a probability distribution:
zi = qT hi, (3)
a = softmax(z), (4)
where z ∈ R^L, a ∈ R^L, and L is the total number of hidden states. The probability mass assigned to a given word is the sum of the probability mass given to all token positions where the given word appears. A gate value of 0 means only the pointer is used and 1 means only the softmax-RNN is used:
p(yi | xi) = g pvocab(yi | xi) + (1 − g) pptr(yi | xi). (6) | 1609.07843#8 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
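Eq. 6 in the chunk above mixes the RNN softmax distribution with the pointer distribution through the gate g. A toy NumPy sketch of that interpolation, with made-up probabilities standing in for both components:

```python
import numpy as np

V = 6                                        # toy vocabulary size
p_vocab = np.full(V, 1.0 / V)                # stand-in for softmax(U h_{N-1})
p_ptr = np.array([0.0, 0.7, 0.0, 0.3, 0.0, 0.0])  # pointer mass on two context words
g = 0.4                                      # gate: 1 -> RNN only, 0 -> pointer only

p = g * p_vocab + (1.0 - g) * p_ptr          # Eq. 6
print(p, p.sum())                            # mixed distribution still sums to 1
```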
1609.07843 | 9 | p(yi | xi) = g pvocab(yi | xi) + (1 − g) pptr(yi | xi). (6)
pptr(w) = Σi∈I(w,x) ai, (5)
While the models could be entirely separate, we re-use many of the parameters for the softmax-RNN and pointer components. This sharing minimizes the total number of parameters in the model and capitalizes on the pointer network's supervision for the RNN component.
where I(w, x) results in all positions of the word w in the input x and pptr ∈ R^V. This technique, referred to as pointer sum attention, has been used for question answering (Kadlec et al., 2016).
Given the length of the documents used in language modeling, it may not be feasible for the pointer network to evaluate an attention score for all the words back to the beginning of the dataset. Instead, we may elect to maintain only a window of the L most recent words for the pointer to match against. The length L of the window is a hyperparameter that can be tuned on a held out dataset or by empirically analyzing how frequently a word at position t appears within the last L words. | 1609.07843#9 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
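Pointer sum attention (Eq. 5 above) collects, for each vocabulary word, the attention mass of every position in the L-word window where that word occurs. A small NumPy sketch with invented window contents and attention weights:

```python
import numpy as np

V = 10                                              # toy vocabulary size
window_ids = np.array([3, 7, 3, 1, 7, 3])           # word id at each of the L window positions
a = np.array([0.05, 0.10, 0.20, 0.05, 0.30, 0.30])  # attention over positions (sums to 1)

p_ptr = np.zeros(V)
np.add.at(p_ptr, window_ids, a)                     # scatter-add: sum a_i over i in I(w, x)
print(p_ptr[3], p_ptr[7])                           # 0.55 and 0.40
```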
1609.08144 | 9 | # 2 Related Work
Statistical Machine Translation (SMT) has been the dominant translation paradigm for decades [3, 4, 5]. Practical implementations of SMT are generally phrase-based systems (PBMT) which translate sequences of words or phrases where the lengths may differ [26].
Even prior to the advent of direct Neural Machine Translation, neural networks have been used as a component within SMT systems with some success. Perhaps one of the most notable attempts involved the use of a joint language model to learn phrase representations [13] which yielded an impressive improvement when combined with phrase-based translation. This approach, however, still makes use of phrase-based translation systems at its core, and therefore inherits their shortcomings. Other proposed approaches for learning phrase representations [7] or learning end-to-end translation with neural networks [24] offered encouraging hints, but ultimately delivered worse overall accuracy compared to standard phrase-based systems.
The concept of end-to-end learning for machine translation has been attempted in the past (e.g., [8]) with | 1609.08144#9 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 10 | To illustrate the advantages of this approach, consider a long article featuring two sentences President Obama discussed the economy and President Obama then flew to Prague. If the query was Which President is the article about?, probability mass could be applied to Obama in either sentence. If the question was instead Who flew to Prague?, only the latter occurrence of Obama provides the proper context. The attention sum model ensures that, as long as the entire attention probability mass is distributed on the occurrences of Obama, the pointer network can achieve zero loss. This flexibility provides supervision without forcing the model to put mass on supervision signals that may be incorrect or lack proper context. This feature becomes an important component in the pointer sentinel mixture model.
# 2.4. Details of the Gating Function
To compute the new pointer sentinel gate g, we modify the pointer component. In particular, we add an additional element to z, the vector of attention scores as defined in Eq. 3. This element is computed using an inner product between the query and the sentinel² vector s ∈ R^H. This change can be summarized by changing Eq. 4 to:
a = softmax([z; qT s]). (7) | 1609.07843#10 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
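Eq. 7 above appends the sentinel score qᵀs to the per-position scores before the softmax, and the final element of the resulting distribution is read off as the gate g. A toy sketch, with random vectors standing in for the learned query, hidden states, and sentinel:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
H, L = 8, 5
h = rng.normal(size=(L, H))                 # hidden states in the pointer window
q = rng.normal(size=H)                      # query computed from the RNN state
s = rng.normal(size=H)                      # sentinel vector embedding

z = h @ q                                   # per-position attention scores
a = softmax(np.append(z, q @ s))            # Eq. 7: L position entries plus the sentinel
g = a[-1]                                   # gate value: mass routed to the RNN softmax
print(a, g)
```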
1609.08144 | 10 | The concept of end-to-end learning for machine translation has been attempted in the past (e.g., [8]) with
limited success. Following seminal papers in the area [41, 2], NMT translation quality has crept closer to the level of phrase-based translation systems for common research benchmarks. Perhaps the first successful attempt at surpassing phrase-based translation was described in [31]. On WMT'14 English-to-French, this system achieved a 0.5 BLEU improvement compared to a state-of-the-art phrase-based system.
Since then, many novel techniques have been proposed to further improve NMT: using an attention mechanism to deal with rare words [37], a mechanism to model translation coverage [42], multi-task and semi-supervised training to incorporate more data [14, 29], a character decoder [9], a character encoder [11], subword units [38] also to deal with rare word outputs, different kinds of attention mechanisms [30], and sentence-level loss minimization [39, 34]. While the translation accuracy of these systems has been encouraging, systematic comparison with large scale, production quality phrase-based translation systems has been lacking.
# 3 Model Architecture | 1609.08144#10 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 11 | a = softmax([z; qT s]). (7)
We define a ∈ R^(V+1) to be the attention distribution over both the words in the pointer window as well as the sentinel state. We interpret the last element of this vector to be the gate value: g = a[V + 1].
Any probability mass assigned to g is given to the standard softmax vocabulary of the RNN. The final updated, normalized pointer probability over the vocabulary in the window then becomes:
pptr(yi | xi) = 1 / (1 − g) · a[1 : V ], (8)
where we denoted [1 : V ] to mean the first V elements of the vector. The final mixture model is the same as Eq. 6 but with the updated Eq. 8 for the pointer probability.
# 2.3. The Pointer Sentinel Mixture Model
While pointer networks have proven to be effective, they cannot predict output words that are not present in the input, a common scenario in language modeling. We propose to resolve this by using a mixture model that combines a standard softmax with a pointer. | 1609.07843#11 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
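Eq. 8 above renormalizes the non-sentinel attention mass by 1/(1 − g) so the pointer component is again a proper distribution before it is mixed with the RNN softmax as in Eq. 6. A sketch with made-up numbers (four window entries plus the sentinel):

```python
import numpy as np

a = np.array([0.1, 0.2, 0.3, 0.15, 0.25])   # attention over 4 window entries + sentinel (sums to 1)
g = a[-1]                                   # sentinel / gate value
p_ptr = a[:-1] / (1.0 - g)                  # Eq. 8: renormalized pointer distribution
p_vocab = np.full(4, 0.25)                  # stand-in RNN softmax over the same 4 words
p = g * p_vocab + (1.0 - g) * p_ptr         # Eq. 6 with the updated pointer probability
print(p_ptr.sum(), p.sum())                 # both equal 1.0
```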
1609.08144 | 11 | # 3 Model Architecture
Our model (see Figure 1) follows the common sequence-to-sequence learning framework [41] with attention [2]. It has three components: an encoder network, a decoder network, and an attention network. The encoder transforms a source sentence into a list of vectors, one vector per input symbol. Given this list of vectors, the decoder produces one symbol at a time, until the special end-of-sentence symbol (EOS) is produced. The encoder and decoder are connected through an attention module which allows the decoder to focus on different regions of the source sentence during the course of decoding.
For notation, we use bold lower case to denote vectors (e.g., v, oi), bold upper case to represent matrices (e.g., U, W), cursive upper case to represent sets (e.g., V , T ), capital letters to represent sequences (e.g. X, Y ), and lower case to represent individual symbols in a sequence, (e.g., x1, x2). | 1609.08144#11 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
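The encoder / attention / decoder wiring described in the chunk above can be sketched schematically. The toy single-layer tanh "RNN" steps and greedy decoding below stand in for the deep LSTM stacks and beam search of the real system, so only the data flow is meant to match the description; all names, shapes, and the stopping condition are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
H, V, EOS = 8, 12, 0                        # hidden size, vocab size, end-of-sentence id

W_enc = rng.normal(scale=0.1, size=(H, H))
W_dec = rng.normal(scale=0.1, size=(H, H))
W_out = rng.normal(scale=0.1, size=(V, H))
embed = rng.normal(scale=0.1, size=(V, H))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def encode(src_ids):
    """Return one vector per source symbol, as in Eq. 1."""
    state, outs = np.zeros(H), []
    for i in src_ids:
        state = np.tanh(W_enc @ (state + embed[i]))
        outs.append(state)
    return np.stack(outs)

def decode(enc_outs, max_len=10):
    """Emit one symbol at a time until EOS, attending over the encoder outputs."""
    state, out_ids = np.zeros(H), []
    for _ in range(max_len):
        attn = softmax(enc_outs @ state)    # attention weights over source positions
        context = attn @ enc_outs           # attention context vector
        state = np.tanh(W_dec @ (state + context))
        y = int(np.argmax(W_out @ state))   # greedy pick of the next symbol
        out_ids.append(y)
        if y == EOS:
            break
    return out_ids

print(decode(encode([3, 5, 7, 2])))
```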
1609.07843 | 12 | This setup encourages the model to have both components compete: use pointers whenever possible and back-off to the standard softmax otherwise. This competition, in particular, was crucial to obtain our best model. By integrating the gating function directly into the pointer computation, it is influenced by both the RNN hidden state and the pointer window's hidden states.
# 2.5. Motivation for the Sentinel as Gating Function
Our mixture model has two base distributions: the softmax vocabulary of the RNN output and the positional vocabulary of the pointer model. We refer to these as the RNN component and the pointer component respectively. To combine the two base distributions, we use a gating function g = p(zi = k | xi), where zi is the latent variable stating which base distribution the data point belongs to. As we only have two base distributions, g can produce a scalar in the range [0, 1]. A value of 0 implies that only the pointer
To make the best decision possible regarding which component to use, the gating function must have as much context as possible. As we increase both the number of timesteps and the window of words for the pointer component to consider, the RNN hidden state by itself isn't guaranteed to | 1609.07843#12 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 12 | Let (X, Y ) be a source and target sentence pair. Let X = x1, x2, x3, ..., xM be the sequence of M symbols in the source sentence and let Y = y1, y2, y3, ..., yN be the sequence of N symbols in the target sentence. The encoder is simply a function of the following form:
x1, x2, ..., xM = EncoderRNN(x1, x2, x3, ..., xM) (1)
In this equation, x1, x2, ..., xM is a list of fixed size vectors. The number of members in the list is the same as the number of symbols in the source sentence (M in this example). Using the chain rule the conditional probability of the sequence P(Y |X) can be decomposed as:
P(Y |X) = P(Y |x1, x2, x3, ..., xM) = ∏_{i=1}^{N} P(yi | y0, y1, y2, ..., yi−1; x1, x2, x3, ..., xM) (2) | 1609.08144#12 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
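Eq. 2 above factorizes P(Y|X) into per-step conditionals. The sketch below simply sums per-step log-probabilities, with random Dirichlet draws standing in for the decoder softmax; the target ids and vocabulary size are invented toy values.

```python
import numpy as np

rng = np.random.default_rng(3)
V, target = 10, [4, 2, 7, 9]                   # toy vocab size and target ids y_1 .. y_N

log_prob = 0.0
for y in target:
    step_dist = rng.dirichlet(np.ones(V))      # stand-in for P(y_i | y_<i, x_1 .. x_M)
    log_prob += np.log(step_dist[y])           # accumulate per-step conditional log-probs

print(log_prob)                                # log P(Y | X) under the toy model
```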
1609.07843 | 13 | 2A sentinel value is inserted at the end of a search space in or- der to ensure a search algorithm terminates if no matching item is found. Our sentinel value terminates the pointer search space and distributes the rest of the probability mass to the RNN vocabulary.
accurately recall the identity or order of words it has recently seen (Adi et al., 2016). This is an obvious limitation of encoding a variable length sequence into a fixed dimensionality vector.
no penalty and the loss is entirely determined by the loss of the softmax-RNN component.
# 2.7. Parameters and Computation Time
In our task, where we may want a pointer window where the length L is in the hundreds, accurately modeling all of this information within the RNN hidden state is impractical. The position of specific words is also a vital feature as relevant words eventually fall out of the pointer component's window. To correctly model this would require the RNN hidden state to store both the identity and position of each word in the pointer window. This is far beyond what the fixed dimensionality hidden state of an RNN is able to accurately capture. | 1609.07843#13 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 13 | where y0 is a special âbeginning of sentenceâ symbol that is prepended to every target sentence. During inference we calculate the probability of the next symbol given the source sentence encoding and the decoded target sequence so far:
P(yi | y0, y1, y2, y3, ..., yi−1; x1, x2, x3, ..., xM) (3)
Our decoder is implemented as a combination of an RNN network and a softmax layer. The decoder RNN network produces a hidden state yi for the next symbol to be predicted, which then goes through the softmax layer to generate a probability distribution over candidate output symbols.
In our experiments we found that for NMT systems to achieve good accuracy, both the encoder and decoder RNNs have to be deep enough to capture subtle irregularities in the source and target languages. This observation is similar to previous observations that deep LSTMs significantly outperform shallow LSTMs [41]. In that work, each additional layer reduced perplexity by nearly 10%. Similar to [31], we use a deep stacked Long Short Term Memory (LSTM) [23] network for both the encoder RNN and the decoder RNN. | 1609.08144#13 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
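The chunk above motivates stacking many recurrent layers. The sketch below stacks simple tanh recurrences purely to illustrate the layering (each layer consumes the output sequence of the layer below); the real system uses LSTM cells and residual connections, which are omitted here for brevity, and all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
H, depth, T = 8, 4, 6                          # hidden size, number of layers, sequence length
Ws = [rng.normal(scale=0.1, size=(H, 2 * H)) for _ in range(depth)]

def run_layer(W, inputs):
    """One simple tanh recurrence over the input sequence."""
    state, outputs = np.zeros(H), []
    for x in inputs:
        state = np.tanh(W @ np.concatenate([state, x]))
        outputs.append(state)
    return outputs

seq = [rng.normal(size=H) for _ in range(T)]   # toy input sequence
for W in Ws:                                   # layer k + 1 consumes the outputs of layer k
    seq = run_layer(W, seq)
print(len(seq), seq[-1].shape)                 # T outputs from the top layer
```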
1609.07843 | 14 | For this reason, we integrate the gating function directly into the pointer network by use of the sentinel. The decision to back-off to the softmax vocabulary is then informed by both the query q, generated using the RNN hidden state hN−1, and from the contents of the hidden states in the pointer window itself. This allows the model to accurately query what hidden states are contained in the pointer window and avoid having to maintain state for when a word may have fallen out of the pointer window.
The pointer sentinel-LSTM mixture model results in a relatively minor increase in parameters and computation time, especially when compared to the size of the models required to achieve similar performance using standard LSTM models.
The only two additional parameters required by the model are those required for computing q, specifically W ∈ R^(H×H) and b ∈ R^H, and the sentinel vector embedding, s ∈ R^H. This is independent of the depth of the RNN as the pointer component only interacts with the output of the final RNN layer. The additional H² + 2H parameters are minor compared to a single LSTM layer's 8H² + 4H parameters. Most state of the art models also require multiple LSTM layers. | 1609.07843#14 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
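A quick arithmetic check of the parameter-count claim in the chunk above, using an arbitrary hidden size H = 650 chosen only for illustration:

```python
H = 650                                  # illustrative hidden size, not taken from the paper
pointer_extra = H * H + 2 * H            # W: H*H, bias b: H, sentinel s: H
lstm_layer = 8 * H * H + 4 * H           # one LSTM layer as counted in the text above
print(pointer_extra, lstm_layer, pointer_extra / lstm_layer)  # roughly 12.5% of one layer
```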
1609.07843 | 15 | In terms of additional computation, a pointer sentinel-LSTM of window size L only requires computing the query q (a linear layer with tanh activation), a total of L parallelizable inner product calculations, and the attention scores for the L resulting scalars via the softmax function.
# 2.6. Pointer Sentinel Loss Function
where ŷi is a one hot encoding of the correct output. During training, as ŷi is one hot, only a single mixed probability p(yij) must be computed for calculating the loss. This can result in a far more efficient GPU implementation. At prediction time, when we want all values for p(yi | xi), a maximum of L word probabilities must be mixed, as there is a maximum of L unique words in the pointer window of length L. This mixing can occur on the CPU where random access indexing is more efficient than the GPU.
Following the pointer sum attention network, the aim is to place probability mass from the attention mechanism on the correct output ŷi if it exists in the input. In the case of our mixture model the pointer loss instead becomes:
−log(g + Σi∈I(y,x) ai), (9) | 1609.07843#15 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
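Eq. 9 above penalizes the pointer component only through the total mass it places on the gate plus the correct word's window positions. A toy NumPy sketch with invented attention weights and window contents:

```python
import numpy as np

a = np.array([0.05, 0.25, 0.10, 0.40, 0.20])  # attention over 4 positions + sentinel (sums to 1)
g = a[-1]                                     # sentinel / gate value
window_ids = np.array([7, 3, 9, 3])           # word id at each window position
correct = 3                                   # the target word

mass = g + a[:-1][window_ids == correct].sum()  # g + sum of a_i over i in I(y, x)
loss = -np.log(mass)                            # Eq. 9; only the gate term remains if the
print(loss)                                     # correct word is absent from the window
```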
Figure 1: The model architecture of GNMT, Google's Neural Machine Translation system. On the left is the encoder network, on the right is the decoder network, in the middle is the attention module. The bottom encoder layer is bi-directional: the pink nodes gather information from left to right while the green nodes gather information from right to left. The other layers of the encoder are uni-directional. Residual connections start from the layer third from the bottom in the encoder and decoder. The model is partitioned into multiple GPUs to speed up training. In our setup, we have 8 encoder LSTM layers (1 bi-directional layer and 7 uni-directional layers), and 8 decoder layers. With this setting, one model replica is partitioned 8-ways and is placed on 8 different GPUs typically belonging to one host machine. During training, the bottom bi-directional encoder layers compute in parallel first. Once both finish, the uni-directional encoder layers can start computing, each on a separate GPU. To retain as much parallelism as possible during running the decoder layers, we use the bottom decoder layer output only for obtaining recurrent attention context, which | 1609.08144#15 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
−log(g + Σi∈I(y,x) ai), (9)
where I(y, x) results in all positions of the correct output y in the input x. The gate g may be assigned all probability mass if, for instance, the correct output ŷi exists only in the softmax-RNN vocabulary. Furthermore, there is no penalty if the model places the entire probability mass on any of the instances of the correct word in the input window. If the pointer component places the entirety of the probability mass on the gate g, the pointer network incurs
# 3. Related Work
Considerable research has been dedicated to the task of language modeling, from traditional machine learning techniques such as n-grams to neural sequence models in deep learning.
Mixture models composed of various knowledge sources have been proposed in the past for language modeling. Rosenfeld (1996) uses a maximum entropy model to combine a variety of information sources to improve language modeling on news text and speech. These information sources include complex overlapping n-gram distributions and n-gram caches that aim to capture rare words. The n-gram cache could be considered similar in some ways to our model's pointer network, where rare or contextually relevant words are stored for later use. | 1609.07843#16 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 16 | GPU. To retain as much parallelism as possible during running the decoder layers, we use the bottom decoder layer output only for obtaining recurrent attention context, which is sent directly to all the remaining decoder layers. The softmax layer is also partitioned and placed on multiple GPUs. Depending on the output vocabulary size we either have them run on the same GPUs as the encoder and decoder networks, or have them run on a separate set of dedicated GPUs. | 1609.08144#16 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 17 | Beyond n-grams, neural sequence models such as recurrent neural networks have been shown to achieve state of the art results (Mikolov et al., 2010). A variety of RNN regularization methods have been explored, including a number of dropout variations (Zaremba et al., 2014; Gal, 2015) which prevent overfitting of complex LSTM language models. Other work has improved language modeling performance by modifying the RNN architecture to better handle increased recurrence depth (Zilly et al., 2016).
In order to increase capacity and minimize the impact of vanishing gradients, some language and translation
            | Penn Treebank            | WikiText-2                  | WikiText-103
            | Train    Valid   Test    | Train      Valid    Test    | Train        Valid    Test
Articles    | -        -       -       | 600        60       60      | 28,475       60       60
Tokens      | 929,590  73,761  82,431  | 2,088,628  217,646  245,569 | 103,227,021  217,646  245,569
Vocab size  | 10,000                   | 33,278                      | 267,735
OoV rate    | 4.8%                     | 2.6%                        | 0.4%
1609.08144 | 17 | The attention context $a_i$ for the current time step is computed according to the following formulas:
$$s_t = \mathrm{AttentionFunction}(\mathbf{y}_{i-1}, \mathbf{x}_t) \quad \forall t,\ 1 \le t \le M$$
$$p_t = \exp(s_t) \Big/ \sum_{t=1}^{M} \exp(s_t) \quad \forall t,\ 1 \le t \le M$$
$$\mathbf{a}_i = \sum_{t=1}^{M} p_t \cdot \mathbf{x}_t \qquad (4)$$
where AttentionFunction in our implementation is a feed forward network with one hidden layer.
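The text above only pins down the interface of AttentionFunction (a feed-forward scorer with one hidden layer, followed by a softmax over the M source positions and a weighted sum). The NumPy sketch below assumes a standard additive (tanh) form for that scorer; the parameter names W1, W2 and v are illustrative and not taken from the paper.

```python
import numpy as np

def attention_context(y_prev, xs, W1, W2, v):
    """One attention step in the spirit of Eq. (4): score every source position
    with a one-hidden-layer feed-forward net, normalize with a softmax, and
    return the weighted sum of the encoder outputs."""
    scores = np.array([v @ np.tanh(W1 @ y_prev + W2 @ x_t) for x_t in xs])  # s_t
    p = np.exp(scores - scores.max())
    p /= p.sum()                                                            # p_t
    return (p[:, None] * np.stack(xs)).sum(axis=0)                          # a_i

# toy usage: M = 5 source positions, 4-dim encoder outputs, 3-dim decoder state
rng = np.random.default_rng(0)
xs = [rng.standard_normal(4) for _ in range(5)]
y_prev = rng.standard_normal(3)
W1 = rng.standard_normal((8, 3))
W2 = rng.standard_normal((8, 4))
v = rng.standard_normal(8)
print(attention_context(y_prev, xs, W1, W2, v).shape)   # (4,)
```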
# 3.1 Residual Connections
As mentioned above, deep stacked LSTMs often give better accuracy over shallower models. However, simply stacking more layers of LSTM works only to a certain number of layers, beyond which the network becomes
too slow and difficult to train, likely due to exploding and vanishing gradient problems [33, 22]. In our experience with large-scale translation tasks, simple stacked LSTM layers work well up to 4 layers, barely with 6 layers, and very poorly beyond 8 layers.
1609.07843 | 18 | Table 1. Statistics of the Penn Treebank, WikiText-2, and WikiText-103. The out of vocabulary (OoV) rate notes what percentage of tokens have been replaced by an (unk) token. The token count includes newlines which add to the structure of the WikiText datasets.
models have also added a soft attention or memory component (Bahdanau et al., 2015; Sukhbaatar et al., 2015; Cheng et al., 2016; Kumar et al., 2016; Xiong et al., 2016; Ahn et al., 2016). These mechanisms allow for the retrieval and use of relevant previous hidden states. Soft attention mechanisms need to first encode the relevant word into a state vector and then decode it again, even if the output word is identical to the input word used to compute that hidden state or memory. A drawback to soft attention is that if, for instance, January and March are both equally attended candidates, the attention mechanism may blend the two vectors, resulting in a context vector closest to February (Kadlec et al., 2016). Even with attention, the standard softmax classifier used in these models often struggles to correctly predict rare or previously unknown words.
Figure 2: The difference between normal stacked LSTM and our stacked LSTM with residual connections. On the left: simple stacked LSTM layers [41]. On the right: our implementation of stacked LSTM layers with residual connections. With residual connections, the input to the bottom LSTM layer (the $x^0_i$'s fed to LSTM$_1$) is element-wise added to the output from the bottom layer (the $x^1_i$'s). This sum is then fed to the top LSTM layer (LSTM$_2$) as the new input.
1609.07843 | 19 | Attention-based pointer mechanisms were introduced in Vinyals et al. (2015), where the pointer network is able to select elements from the input as output. In the above example, only January or March would be available as options, as February does not appear in the input. The use of pointer networks has been shown to help with geometric problems (Vinyals et al., 2015), code generation (Ling et al., 2016), summarization (Gu et al., 2016; Gülçehre et al., 2016), and question answering (Kadlec et al., 2016). While pointer networks improve performance on rare words and long-term dependencies, they are unable to select words that do not exist in the input.
according to the switching network, and the word or location with the highest final attention score is selected for output. Although this approach uses both a pointer and an RNN component, it is not a mixture model and does not combine the probabilities for a word if it occurs in both the pointer location softmax and the RNN vocabulary softmax. In our model the word probability is a mix of both the RNN and pointer components, allowing for better predictions when the context may be ambiguous.
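As a rough illustration of the mixture just described, the sketch below combines an RNN vocabulary softmax with a pointer distribution over the recent context, using a final "sentinel" position in the pointer softmax to decide how much probability mass is left to the vocabulary component. The exact parameterization of the scores and gate lives in the model section of the paper, not in this excerpt, so this is only a hedged sketch.

```python
import numpy as np

def mixture_word_probs(p_vocab, ptr_scores, window_words, vocab):
    """Combine an RNN vocabulary softmax with a pointer distribution over the
    recent context. The final pointer position acts as a sentinel: the mass it
    receives gates how much of the vocabulary softmax survives."""
    z = np.exp(ptr_scores - ptr_scores.max())
    z /= z.sum()                                  # softmax over [positions..., sentinel]
    g, p_positions = z[-1], z[:-1]
    p = {w: g * p_vocab[i] for i, w in enumerate(vocab)}
    for pos, word in enumerate(window_words):     # pointer mass goes to the pointed word
        p[word] = p.get(word, 0.0) + p_positions[pos]
    return p

vocab = ["the", "january", "march", "february", "<unk>"]
p_vocab = np.full(len(vocab), 1.0 / len(vocab))
window = ["january", "the", "march"]
scores = np.array([2.0, 0.1, 1.5, 0.5])           # 3 context positions + sentinel
probs = mixture_word_probs(p_vocab, scores, window, vocab)
print(round(sum(probs.values()), 6))              # 1.0 -- still a valid distribution
```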
1609.08144 | 19 | Motivated by the idea of modeling differences between an intermediate layer's output and the targets, which has been shown to work well for many projects in the past [16, 21, 40], we introduce residual connections among the LSTM layers in a stack (see Figure 2). More concretely, let LSTM$_i$ and LSTM$_{i+1}$ be the $i$-th and $(i+1)$-th LSTM layers in a stack, whose parameters are $W_i$ and $W_{i+1}$ respectively. At the $t$-th time step, for the stacked LSTM without residual connections, we have:
$$c^i_t,\ m^i_t = \mathrm{LSTM}_i\big(c^i_{t-1},\ m^i_{t-1},\ x^{i-1}_t;\ W_i\big)$$
$$x^i_t = m^i_t$$
$$c^{i+1}_t,\ m^{i+1}_t = \mathrm{LSTM}_{i+1}\big(c^{i+1}_{t-1},\ m^{i+1}_{t-1},\ x^i_t;\ W_{i+1}\big) \qquad (5)$$
where $x^i_t$ is the input to LSTM$_i$ at time step $t$, and $m^i_t$ and $c^i_t$ are the hidden states and memory states of LSTM$_i$ at time step $t$, respectively.
1609.07843 | 20 | Extending this concept further, the latent predictor network (Ling et al., 2016) generates an output sequence conditioned on an arbitrary number of base models, where each base model may have differing granularity. In their task of code generation, the output could be produced one character at a time using a standard softmax or instead copy entire words from referenced text fields using a pointer network. As opposed to Gülçehre et al. (2016), all states which produce the same output are merged by summing their probabilities. Their model, however, requires a more complex training process involving the forward-backward algorithm for Semi-Markov models to prevent an exponential explosion in potential paths.
# 4. WikiText - A Benchmark for Language Modeling
1609.08144 | 20 | With residual connections between LSTM$_i$ and LSTM$_{i+1}$, the above equations become:
$$c^i_t,\ m^i_t = \mathrm{LSTM}_i\big(c^i_{t-1},\ m^i_{t-1},\ x^{i-1}_t;\ W_i\big)$$
$$x^i_t = m^i_t + x^{i-1}_t$$
$$c^{i+1}_t,\ m^{i+1}_t = \mathrm{LSTM}_{i+1}\big(c^{i+1}_{t-1},\ m^{i+1}_{t-1},\ x^i_t;\ W_{i+1}\big) \qquad (6)$$
Residual connections greatly improve the gradient flow in the backward pass, which allows us to train very deep encoder and decoder networks. In most of our experiments, we use 8 LSTM layers for the encoder and decoder, though residual connections can allow us to train substantially deeper networks (similar to what was observed in [45]).
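A minimal NumPy sketch of one time step of the residual stack in Eq. (6) is given below. The LSTM cell is deliberately simplified (a single weight matrix, no biases or peepholes), and the residual addition assumes, as the equations do, that the layer input and output share the same dimensionality.

```python
import numpy as np

def lstm_step(c_prev, m_prev, x, W):
    """Minimal LSTM cell: W maps [x; m_prev] to the four gate pre-activations."""
    i, f, o, g = np.split(W @ np.concatenate([x, m_prev]), 4)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    m = sigmoid(o) * np.tanh(c)
    return c, m

def residual_stack_step(states, x0, Ws):
    """One time step through a residual LSTM stack: each layer's output is
    element-wise added to its input before feeding the next layer (Eq. 6)."""
    x, new_states = x0, []
    for (c_prev, m_prev), W in zip(states, Ws):
        c, m = lstm_step(c_prev, m_prev, x, W)
        new_states.append((c, m))
        x = m + x                    # residual connection: x^i_t = m^i_t + x^{i-1}_t
    return new_states, x

d, layers = 8, 4                     # input size must equal hidden size for the addition
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4 * d, 2 * d)) * 0.1 for _ in range(layers)]
states = [(np.zeros(d), np.zeros(d)) for _ in range(layers)]
states, top = residual_stack_step(states, rng.standard_normal(d), Ws)
print(top.shape)                     # (8,)
```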
# 3.2 Bi-directional Encoder for First Layer
For translation systems, the information required to translate certain words on the output side can appear anywhere on the source side. Often the source side information is approximately left-to-right, similar to
the target side, but depending on the language pair the information for a particular output word can be distributed and even be split up in certain regions of the input side.
Gülçehre et al. (2016) introduce a pointer softmax model that can generate output using either the vocabulary softmax of an RNN or the location softmax of a pointer network. Not only does this allow for producing OoV words which are not in the input, the pointer softmax model is able to better deal with rare and unknown words than a model only featuring an RNN softmax. Rather than constructing a mixture model as in our work, they use a switching network to decide which component to use. For neural machine translation, the switching network is conditioned on the representation of the context of the source text and the hidden state of the decoder. The pointer network is not used as a source of information for the switching network as in our model. The pointer and RNN softmax are scaled
We first describe the most commonly used language modeling dataset and its pre-processing in order to then motivate the need for a new benchmark dataset.
To have the best possible context at each point in the encoder network it makes sense to use a bi-directional RNN [36] for the encoder, which was also used in [2]. To allow for maximum possible parallelization during computation (to be discussed in more detail in section 3.3), bi-directional connections are only used for the bottom encoder layer; all other encoder layers are uni-directional. Figure 3 illustrates our use of bi-directional LSTMs at the bottom encoder layer. The layer LSTM$_f$ processes the source sentence from left to right, while the layer LSTM$_b$ processes the source sentence from right to left. Outputs from LSTM$_f$ and LSTM$_b$ are first concatenated and then fed to the next layer.
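The following sketch mirrors this description: a forward pass over the source, a backward pass over the reversed source, and a per-position concatenation that is then consumed by the (uni-directional) layer above. A simple tanh cell stands in for the LSTM purely to keep the example short.

```python
import numpy as np

def rnn_step(h_prev, x, W):
    """Stand-in recurrent cell; the real encoder uses the LSTM described earlier."""
    return np.tanh(W @ np.concatenate([x, h_prev]))

def bidirectional_bottom_layer(xs, Wf, Wb, d):
    """Bottom encoder layer sketch: a left-to-right pass, a right-to-left pass,
    and a per-position concatenation feeding the uni-directional layer above."""
    hf, hb = np.zeros(d), np.zeros(d)
    fwd, bwd = [], []
    for x in xs:                      # forward direction (LSTM_f)
        hf = rnn_step(hf, x, Wf)
        fwd.append(hf)
    for x in reversed(xs):            # backward direction (LSTM_b)
        hb = rnn_step(hb, x, Wb)
        bwd.append(hb)
    bwd.reverse()                     # realign backward outputs with source positions
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d_in, d = 6, 8
xs = [rng.standard_normal(d_in) for _ in range(5)]
Wf = rng.standard_normal((d, d_in + d)) * 0.1
Wb = rng.standard_normal((d, d_in + d)) * 0.1
outs = bidirectional_bottom_layer(xs, Wf, Wb, d)
print(len(outs), outs[0].shape)       # 5 (16,)
```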
1609.07843 | 22 | # 4.1. Penn Treebank
In order to compare our model to the many recent neural language models, we conduct word-level prediction experiments on the Penn Treebank (PTB) dataset (Marcus et al., 1993), pre-processed by Mikolov et al. (2010). The dataset consists of 929k training words, 73k validation words, and 82k test words. As part of the pre-processing performed by Mikolov et al. (2010), words were lower-cased, numbers were replaced with N, newlines were replaced with (eos), and all other punctuation was removed. The vocabulary is the most frequent 10k words, with the rest of the tokens being replaced by an (unk) token. For full statistics, refer to Table 1.
Algorithm 1 Calculate truncated BPTT, where every k1 timesteps we run back propagation for k2 timesteps
for t = 1 to T do
    Run the RNN for one step, computing h_t and z_t
    if t mod k1 == 0 then
        Run BPTT from t down to t - k2
    end if
end for
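The schedule in Algorithm 1 can be made concrete with a few lines of Python; the sketch below only tracks which spans would be backpropagated through, leaving the actual forward and backward computation abstract.

```python
def truncated_bptt_schedule(T, k1, k2):
    """Sketch of the Algorithm 1 schedule: advance the RNN one step at a time
    and, every k1 steps, backpropagate through the most recent k2 steps."""
    spans = []
    for t in range(1, T + 1):
        # (forward) run the RNN for one step, computing h_t and z_t -- omitted here
        if t % k1 == 0:
            spans.append((t, max(t - k2, 0)))   # run BPTT from t down to t - k2
    return spans

# k1 = k2 means gradients never cross chunk boundaries; k1 < k2 overlaps the spans.
print(truncated_bptt_schedule(T=12, k1=5, k2=5))   # [(5, 0), (10, 5)]
```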
Figure 3: The structure of bi-directional connections in the first layer of the encoder. LSTM layer LSTM$_f$ processes information from left to right, while LSTM layer LSTM$_b$ processes information from right to left. Outputs from LSTM$_f$ and LSTM$_b$ are first concatenated and then fed to the next LSTM layer LSTM$_1$.
# 3.3 Model Parallelism
Due to the complexity of our model, we make use of both model parallelism and data parallelism to speed up training. Data parallelism is straightforward: we train n model replicas concurrently using a Downpour SGD algorithm [12]. The n replicas all share one copy of model parameters, with each replica asynchronously updating the parameters using a combination of Adam [25] and SGD algorithms. In our experiments, n is often around 10. Each replica works on a mini-batch of m sentence pairs at a time, which is often 128 in our experiments.
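As a toy illustration of this data-parallel setup (n replicas updating one shared copy of the parameters from possibly stale snapshots), the sketch below simulates the asynchrony sequentially; a production system such as Downpour SGD would instead use a parameter server and genuinely concurrent replicas.

```python
import numpy as np

def data_parallel_sgd(grad_fn, theta, batches, n_replicas=4, lr=0.1):
    """Sequential simulation of asynchronous data parallelism: each 'replica'
    computes a gradient against a stale snapshot of the shared parameters,
    applies it to the single shared copy, then fetches fresh parameters."""
    snapshots = [theta.copy() for _ in range(n_replicas)]
    for step, batch in enumerate(batches):
        r = step % n_replicas                 # the replica that finishes next
        g = grad_fn(snapshots[r], batch)      # gradient w.r.t. its stale snapshot
        theta -= lr * g                       # asynchronous update of the shared copy
        snapshots[r] = theta.copy()           # replica pulls the latest parameters
    return theta

# toy objective: move theta towards the mean of each mini-batch (m = 128)
grad_fn = lambda th, batch: 2 * (th - batch.mean(axis=0))
rng = np.random.default_rng(0)
batches = [rng.normal(loc=3.0, size=(128, 2)) for _ in range(50)]
print(data_parallel_sgd(grad_fn, np.zeros(2), batches).round(1))  # close to [3. 3.]
```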
# 4.2. Reasons for a New Dataset
While the processed version of the PTB above has been frequently used for language modeling, it has many limitations. The tokens in PTB are all lower case, stripped of any punctuation, and limited to a vocabulary of only 10k words. These limitations mean that the PTB is unrealistic for real language use, especially when far larger vocabularies with many rare words are involved. Fig. 3 illustrates this using a Zipfian plot over the training partition of the PTB. The curve stops abruptly when hitting the 10k vocabulary. Given that accurately predicting rare words, such as named entities, is an important task for many applications, the lack of a long tail for the vocabulary is problematic.
To ensure the dataset is immediately usable by existing language modeling tools, we have provided the dataset in the same format and following the same conventions as that of the PTB dataset above.
# 4.4. Statistics
1609.08144 | 23 | In addition to data parallelism, model parallelism is used to improve the speed of the gradient computation on each replica. The encoder and decoder networks are partitioned along the depth dimension and are placed on multiple GPUs, effectively running each layer on a different GPU. Since all but the first encoder layer are uni-directional, layer i + 1 can start its computation before layer i is fully finished, which improves training speed. The softmax layer is also partitioned, with each partition responsible for a subset of symbols in the output vocabulary. Figure 1 shows more details of how partitioning is done.
Model parallelism places certain constraints on the model architectures we can use. For example, we cannot afford to have bi-directional LSTM layers for all the encoder layers, since doing so would reduce parallelism among subsequent layers, as each layer would have to wait until both forward and backward directions of the previous layer have finished. This would effectively constrain us to make use of only 2 GPUs
Other larger scale language modeling datasets exist. Unfortunately, they either have restrictive licensing which prevents widespread use or have randomized sentence ordering (Chelba et al., 2013) which is unrealistic for most language use and prevents the effective learning and evaluation of longer term dependencies. Hence, we constructed a language modeling dataset using text extracted from Wikipedia and will make this available to the community.
# 4.3. Construction and Pre-processing
We selected articles only fitting the Good or Featured article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting the raw text from Wikipedia mark-up is nontrivial due to the large number of macros in use. These macros are used extensively and include metric conversion, abbreviations, language notation, and date handling.
in parallel (one for the forward direction and one for the backward direction). For the attention portion of the model, we chose to align the bottom decoder output to the top encoder output to maximize parallelism when running the decoder network. Had we aligned the top decoder layer to the top encoder layer, we would have removed all parallelism in the decoder network and would not benefit from using more than one GPU for decoding.
# 4 Segmentation Approaches
Neural Machine Translation models often operate with fixed word vocabularies even though translation is fundamentally an open vocabulary problem (names, numbers, dates, etc.). There are two broad categories of approaches to address the translation of out-of-vocabulary (OOV) words. One approach is to simply copy rare words from source to target (as most rare words are names or numbers where the correct translation is just a copy), either based on the attention model [37], using an external alignment model [31], or even using a more complicated special purpose pointing network [18]. Another broad category of approaches is to use sub-word units, e.g., characters [10], mixed word/characters [28], or more intelligent sub-words [38].
1609.07843 | 25 | Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and LaTeX code were replaced with placeholder tokens. Normalization and tokenization were performed using the Moses tokenizer (Koehn et al., 2007), slightly augmented to further split numbers (8,600 → 8 @,@ 600) and with some additional minor fixes. Following Chelba et al. (2013) a vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the (unk) token, also a part of the vocabulary.
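A minimal sketch of the vocabulary construction described above (keep tokens seen at least 3 times, map everything else to the (unk) token):

```python
from collections import Counter

def build_vocab(tokenized_lines, min_count=3, unk="<unk>"):
    """Keep every token seen at least min_count times; everything else maps to unk."""
    counts = Counter(tok for line in tokenized_lines for tok in line)
    return {unk} | {tok for tok, c in counts.items() if c >= min_count}

def apply_vocab(tokenized_lines, vocab, unk="<unk>"):
    return [[tok if tok in vocab else unk for tok in line] for line in tokenized_lines]

lines = [["the", "cat", "sat"], ["the", "cat", "ran"],
         ["the", "dog", "sat"], ["the", "cat", "sat"]]
vocab = build_vocab(lines, min_count=3)
print(sorted(vocab))                 # ['<unk>', 'cat', 'sat', 'the']
print(apply_vocab(lines, vocab)[2])  # ['the', '<unk>', 'sat']
```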
The full WikiText dataset is over 103 million words in size, a hundred times larger than the PTB. It is also a tenth the size of the One Billion Word Benchmark (Chelba et al., 2013), one of the largest publicly available language modeling benchmarks, whilst consisting of articles that allow for the capture and usage of longer term dependencies as might be found in many real world tasks.
1609.08144 | 25 | # 4.1 Wordpiece Model
Our most successful approach falls into the second category (sub-word units), and we adopt the wordpiece model (WPM) implementation initially developed to solve a Japanese/Korean segmentation problem for the Google speech recognition system [35]. This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters. It is similar to the method used in [38] to deal with rare words in Neural Machine Translation.
For processing arbitrary words, we first break words into wordpieces given a trained wordpiece model. Special word boundary symbols are added before training of the model such that the original word sequence can be recovered from the wordpiece sequence without ambiguity. At decoding time, the model first produces a wordpiece sequence, which is then converted into the corresponding word sequence. Here is an example of a word sequence and the corresponding wordpiece sequence:
⢠Word: Jet makers feud over seat width with big orders at stake
⢠wordpieces: _J et _makers _fe ud _over _seat _width _with _big _orders _at _stake | 1609.08144#25 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
1609.07843 | 26 | The dataset is available in two different sizes: WikiText-2 and WikiText-103. Both feature punctuation, original casing, a larger vocabulary, and numbers. WikiText-2 is two times the size of the Penn Treebank dataset. WikiText-103 features all extracted articles. Both datasets use the same articles for validation and testing, with the only difference being the vocabularies. For full statistics, refer to Table 1.
# 5. Experiments
# 5.1. Training Details
As the pointer sentinel mixture model uses the outputs of the RNN from up to L timesteps back, this presents a challenge for training. If we do not regenerate the stale historical outputs of the RNN when we update the gradients, backpropagation through these stale outputs may result in incorrect gradient updates. If we do regenerate all stale outputs of the RNN, the training process is far slower. As we can make no theoretical guarantees on the impact of stale outputs on gradient updates, we opt to regenerate the window of RNN outputs used by the pointer component after each gradient update.
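A hedged sketch of this choice: after every gradient update, the window of RNN outputs that the pointer attends over is recomputed with the current weights. The stand-in tanh cell and the restart from a zero state are simplifications for illustration only.

```python
import numpy as np

def rnn_step(h, x, W):
    """Stand-in recurrent cell; the actual model is an LSTM."""
    return np.tanh(W @ np.concatenate([x, h]))

def regenerate_pointer_window(context_embeddings, W, L):
    """Recompute the last L RNN outputs with the *current* weights W so the
    pointer component never attends over stale hidden states."""
    h = np.zeros(W.shape[0])
    outputs = []
    for x in context_embeddings[-(L + 1):]:
        h = rnn_step(h, x, W)
        outputs.append(h)
    return outputs[:-1], h        # L states for the pointer, plus the current state

rng = np.random.default_rng(0)
d_in, d, L = 4, 8, 5
W = rng.standard_normal((d, d_in + d)) * 0.1
context = [rng.standard_normal(d_in) for _ in range(20)]
window, h_t = regenerate_pointer_window(context, W, L)
print(len(window), h_t.shape)     # 5 (8,)
```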
We also use truncated backpropagation through time (BPTT) in a different manner from many other RNN language models. Truncated BPTT allows for practical time-efficient training of RNN models but has fundamental trade-offs that are rarely discussed.
For running truncated BPTT, BPTT is run for k2 timesteps every k1 timesteps, as seen in Algorithm 1. For many RNN
[Figure 3 contains two panels: a Zipf plot for the Penn Treebank and a Zipf plot for WikiText-2, each showing the absolute frequency of each token against its frequency rank on log-log axes.]
Figure 3. Zipfian plot over the training partition in Penn Treebank and WikiText-2 datasets. Notice the severe drop on the Penn Treebank when the vocabulary hits 10^4. Two thirds of the vocabulary in WikiText-2 are past the vocabulary cut-off of the Penn Treebank. | 1609.07843#27 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
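Figure 3 in the chunk above is built by ranking tokens by frequency and plotting absolute frequency against rank on log-log axes. A small sketch of how such a ranking could be computed is below; the whitespace tokenization and the commented file path are assumptions.

```python
from collections import Counter

def zipf_table(tokens):
    """Return (rank, frequency, token) triples sorted by descending
    frequency, i.e. the data behind a Zipf plot."""
    counts = Counter(tokens)
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    return [(rank + 1, freq, tok) for rank, (tok, freq) in enumerate(ranked)]

# Example with a whitespace-tokenized training file (path is hypothetical):
# tokens = open("wiki.train.tokens").read().split()
# for rank, freq, tok in zipf_table(tokens)[:5]:
#     print(rank, tok, freq)
```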
1609.08144 | 27 | The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. Given a training corpus and a number of desired tokens D, the optimization problem is to select D wordpieces such that the resulting corpus is minimal in the number of wordpieces when segmented according to the chosen wordpiece model. Our greedy algorithm for this optimization problem is similar to [38] and is described in more detail in [35]. Compared to the original implementation used in [35], we use a special symbol only at the beginning of the words and not at both ends. We also cut the number of basic characters to a manageable number depending on the data (roughly 500 for Western languages, more for Asian languages) and map the rest to a special unknown character to avoid polluting the given wordpiece vocabulary with very rare characters. We find that using a total vocabulary of between 8k and 32k wordpieces achieves both good accuracy (BLEU scores) and fast decoding speed across all language pairs we have tried. | 1609.08144#27 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
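The chunk above describes selecting a wordpiece vocabulary and then segmenting text according to the chosen model. The vocabulary-selection algorithm itself is only referenced ([35], [38]); the sketch below shows only a greedy longest-match segmenter in that spirit, with a word-boundary marker on the first piece, mirroring the "symbol only at the beginning of words" convention. The vocabulary contents, the boundary symbol, and the <unk> fallback are assumptions.

```python
def segment(word, vocab, boundary="_"):
    """Hypothetical greedy longest-match segmentation of one word into
    wordpieces drawn from `vocab` (a set of pieces)."""
    pieces, i, first = [], 0, True
    while i < len(word):
        for j in range(len(word), i, -1):        # try the longest piece first
            candidate = (boundary if first else "") + word[i:j]
            if candidate in vocab:
                pieces.append(candidate)
                i, first = j, False
                break
        else:                                    # nothing matched this position
            pieces.append("<unk>")
            i, first = i + 1, False
    return pieces

# segment("Jet", {"_J", "et", "_Jet"}) -> ["_Jet"]
```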
1609.07843 | 28 | language modeling training schemes, k1 = k2, meaning that every k timesteps truncated BPTT is performed for the k previous timesteps. This results in only a single RNN output receiving backpropagation for k timesteps, with the other extreme being that the first token receives backpropagation for 0 timesteps. This issue is compounded by the fact that most language modeling code splits the data temporally such that the boundaries are always the same. As such, most words in the training data will never experience a full backpropagation for k timesteps.
In our task, the pointer component always looks L timesteps into the past if L past timesteps are available. We select k1 = 1 and k2 = L such that for each timestep we perform backpropagation for L timesteps and advance one timestep at a time. Only the loss for the final predicted word is used for backpropagation through the window.
ration which features a hidden size of 1500 and a two layer LSTM. | 1609.07843#28 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 28 | As mentioned above, in translation it often makes sense to copy rare entity names or numbers directly from the source to the target. To facilitate this type of direct copying, we always use a shared wordpiece model for both the source language and target language. Using this approach, it is guaranteed that the same string in source and target sentence will be segmented in exactly the same way, making it easier for the system to learn to copy these tokens.
Wordpieces achieve a balance between the flexibility of characters and efficiency of words. We also find that our models get better overall BLEU scores when using wordpieces, possibly due to the fact that our models now deal efficiently with an essentially infinite vocabulary without resorting to characters only. The
latter would make the average lengths of the input and output sequences much longer, and therefore would require more computation.
# 4.2 Mixed Word/Character Model | 1609.08144#28 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 29 | ration which features a hidden size of 1500 and a two layer LSTM.
We produce results for two model types, an LSTM model that uses dropout regularization and the pointer sentinel-LSTM model. The variants of dropout used were zoneout (Krueger et al., 2016) and variational inference based dropout (Gal, 2015). Zoneout, which stochastically forces some recurrent units to maintain their previous values, was used for the recurrent connections within the LSTM. Variational inference based dropout, where the dropout mask for a layer is locked across timesteps, was used on the input to each RNN layer and also on the output of the final RNN layer. We used a value of 0.5 for both dropout connections.
# 5.3. Comparison over Penn Treebank
# 5.2. Model Details | 1609.07843#29 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
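The chunk above (1609.07843#29) describes the two dropout variants used: zoneout on the recurrent connections and variational (locked-mask) dropout on layer inputs and the final output, both with probability 0.5. A hedged PyTorch-style sketch of the two operations follows; tensor shapes and the test-time handling are assumptions.

```python
import torch

def locked_dropout(x, p=0.5, training=True):
    """Variational dropout sketch: one mask per sequence, shared across
    timesteps. x has shape (batch, time, features)."""
    if not training or p == 0:
        return x
    mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - p) / (1 - p)
    return x * mask                              # same mask at every timestep

def zoneout(h_new, h_prev, p=0.5, training=True):
    """Zoneout sketch: stochastically keep the previous hidden state for a
    random subset of units; use the expectation at test time."""
    if not training:
        return p * h_prev + (1 - p) * h_new
    keep_prev = torch.bernoulli(torch.full_like(h_new, p))
    return keep_prev * h_prev + (1 - keep_prev) * h_new
```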
1609.08144 | 29 | latter would make the average lengths of the input and output sequences much longer, and therefore would require more computation.
# 4.2 Mixed Word/Character Model
A second approach we use is the mixed word/character model. As in a word model, we keep a fixed-size word vocabulary. However, unlike in a conventional word model where OOV words are collapsed into a single UNK symbol, we convert OOV words into the sequence of its constituent characters. Special prefixes are prepended to the characters, to 1) show the location of the characters in a word, and 2) to distinguish them from normal in-vocabulary characters. There are three prefixes: <B>, <M>, and <E>, indicating beginning of the word, middle of the word and end of the word, respectively. For example, let's assume the word Miki is not in the vocabulary. It will be preprocessed into a sequence of special tokens: <B>M <M>i <M>k <E>i. The process is done on both the source and the target sentences. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step.
# 5 Training Criteria | 1609.08144#29 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
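The mixed word/character chunk above gives the Miki -> <B>M <M>i <M>k <E>i example and notes that the prefixes make the tokenization reversible. A small sketch of both directions is below; for brevity it assumes OOV words of length at least two, since single-character handling is not specified in the text.

```python
def encode_oov(word):
    """Split an out-of-vocabulary word (length >= 2 assumed) into prefixed
    characters: <B> marks the first character, <M> the middle ones, <E> the last."""
    return (["<B>" + word[0]]
            + ["<M>" + c for c in word[1:-1]]
            + ["<E>" + word[-1]])

def decode_tokens(tokens):
    """Reverse the tokenization as a post-processing step."""
    words, buffer = [], ""
    for tok in tokens:
        if tok.startswith(("<B>", "<M>", "<E>")):
            buffer += tok[3:]                  # strip the 3-character prefix
            if tok.startswith("<E>"):
                words.append(buffer)
                buffer = ""
        else:
            words.append(tok)
    return words

# encode_oov("Miki") -> ['<B>M', '<M>i', '<M>k', '<E>i']
```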
1609.07843 | 30 | Our experimental setup reflects that of Zaremba et al. (2014) and Gal (2015). We increased the number of timesteps used during training from 35 to 100, matching the length of the window L. Batch size was increased to 32 from 20. We also halve the learning rate when validation perplexity is worse than the previous iteration, stopping training when validation perplexity fails to improve for three epochs or when 64 epochs are reached. The gradients are rescaled if their global norm exceeds 1 (Pascanu et al., 2013b).3 We evaluate the medium model configuration which features a hidden size of H = 650 and a two layer LSTM. We compare against the large model configuration. Table 2 compares the pointer sentinel-LSTM to a variety of other models on the Penn Treebank dataset. The pointer sentinel-LSTM achieves the lowest perplexity, followed by the recent Recurrent Highway Networks (Zilly et al., 2016). The medium pointer sentinel-LSTM model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive | 1609.07843#30 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
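The chunk above mentions that the best large variational LSTM uses Monte Carlo (MC) dropout averaging, rerunning the test model with 1000 different dropout masks. A hedged sketch of the general technique is below; the model interface is an assumption.

```python
import torch

def mc_dropout_predict(model, inputs, n_samples=1000):
    """Monte Carlo dropout averaging sketch: keep dropout active at test
    time and average the predictive distributions over many sampled masks."""
    model.train()                      # leave dropout switched on
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(inputs), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0)           # averaged predictive distribution
```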
1609.08144 | 30 | # 5 Training Criteria
Given a dataset of parallel text containing N input-output sequence pairs, denoted $\mathcal{D} = \{(X^{(i)}, Y^{*(i)})\}_{i=1}^{N}$, standard maximum-likelihood training aims at maximizing the sum of log probabilities of the ground-truth outputs given the corresponding inputs, i.e.
$$\mathcal{O}_{\mathrm{ML}}(\theta) = \sum_{i=1}^{N} \log P_\theta(Y^{*(i)} \mid X^{(i)}) \qquad (7)$$
The main problem with this objective is that it does not reflect the task reward function as measured by the BLEU score in translation. Further, this objective does not explicitly encourage a ranking among incorrect output sequences, where outputs with higher BLEU scores should still obtain higher probabilities under the model, since incorrect outputs are never observed during training. In other words, using maximum-likelihood training only, the model will not learn to be robust to errors made during decoding since they are never observed, which is quite a mismatch between the training and testing procedure. | 1609.08144#30 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
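Equation (7) above is the standard maximum-likelihood objective. As a sketch, it amounts to summing the ground-truth log-probabilities over the training pairs (in practice one minimizes the negative of this quantity); the per-pair model interface below is an assumption.

```python
import torch
import torch.nn.functional as F

def ml_objective(model, pairs):
    """Sum of log P(Y*|X) over (X, Y*) pairs, as in equation (7). `model(x, y)`
    is assumed to return per-target-token logits of shape (target_len, vocab)."""
    total = torch.tensor(0.0)
    for x, y in pairs:
        # y: LongTensor of target token ids, shape (target_len,)
        log_probs = F.log_softmax(model(x, y), dim=-1)
        total = total + log_probs.gather(1, y.unsqueeze(1)).sum()
    return total
```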
1609.07843 | 31 | model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive Monte Carlo (MC) dropout averaging. Monte Carlo dropout averaging is a general improvement for any sequence model that uses dropout but comes at a greatly increased test time cost. In Gal (2015) it requires rerunning the test model with 1000 different dropout masks. The pointer sentinel-LSTM is able to achieve these results with far fewer parameters than other models with comparable performance, specifically with less than a third the parameters used in the large variational LSTM models. | 1609.07843#31 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 31 | Several recent papers [34, 39, 32] have considered different ways of incorporating the task reward into optimization of neural sequence-to-sequence models. In this work, we also attempt to refine a model pre-trained on the maximum likelihood objective to directly optimize for the task reward. We show that, even on large datasets, refinement of state-of-the-art maximum-likelihood models using task reward improves the results considerably.
We consider model refinement using the expected reward objective (also used in [34]), which can be expressed as
$$\mathcal{O}_{\mathrm{RL}}(\theta) = \sum_{i=1}^{N} \sum_{Y \in \mathcal{Y}} P_\theta(Y \mid X^{(i)}) \, r(Y, Y^{*(i)}) \qquad (8)$$
Here, $r(Y, Y^{*(i)})$ denotes the per-sentence score, and we are computing an expectation over all of the output sentences Y, up to a certain length. | 1609.08144#31 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
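Equation (8) above is an expectation over all output sentences, which cannot be enumerated in practice; it is estimated from sequences sampled from the model and scored with a per-sentence reward such as GLEU. The sketch below uses a REINFORCE-style surrogate whose gradient estimates the gradient of that expectation; `model.sample` and its return values are assumed interfaces, not the paper's code.

```python
import torch

def expected_reward_loss(model, x, y_star, reward_fn, m=15):
    """Monte Carlo estimate of the (negated) expected-reward objective using
    m sequences sampled from the model."""
    losses = []
    for _ in range(m):
        y_sampled, log_prob = model.sample(x)   # sequence and its log-probability
        r = reward_fn(y_sampled, y_star)        # per-sentence score, e.g. GLEU
        losses.append(-log_prob * r)            # REINFORCE-style surrogate
    return torch.stack(losses).mean()
```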
1609.07843 | 32 | 3 The highly aggressive clipping is likely due to the increased BPTT length. Even with such clipping, early batches may experience excessively high perplexity, though this settles rapidly.
We also test a variational LSTM that uses zoneout, which
serves as the RNN component of our pointer sentinel-LSTM mixture. This variational LSTM model performs BPTT for the same length L as the pointer sentinel-LSTM, where L = 100 timesteps. The results for this model ablation are worse than that of Gal (2015)'s variational LSTM without Monte Carlo dropout averaging.
# 5.4. Comparison over WikiText-2
As WikiText-2 is being introduced in this dataset, there are no existing baselines. We provide two baselines to compare the pointer sentinel-LSTM against: our variational LSTM using zoneout and the medium variational LSTM used in Gal (2015).4 Attempts to run the Gal (2015) large model variant, a two layer LSTM with hidden size 1500, resulted in out of memory errors on a 12GB K80 GPU, likely due to the increased vocabulary size. We chose the best hyperparameters from PTB experiments for all models. | 1609.07843#32 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
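The training details above rescale gradients whenever their global norm exceeds 1. A hedged sketch of that rescaling is below; PyTorch also ships torch.nn.utils.clip_grad_norm_ for the same purpose.

```python
import torch

def rescale_gradients(parameters, max_norm=1.0):
    """Rescale gradients in place if their global L2 norm exceeds max_norm."""
    grads = [p.grad for p in parameters if p.grad is not None]
    if not grads:
        return torch.tensor(0.0)
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    if total_norm > max_norm:
        scale = max_norm / (total_norm + 1e-6)
        for g in grads:
            g.mul_(scale)
    return total_norm
```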
1609.08144 | 32 | Here, $r(Y, Y^{*(i)})$ denotes the per-sentence score, and we are computing an expectation over all of the output sentences Y, up to a certain length.
The BLEU score has some undesirable properties when used for single sentences, as it was designed to be a corpus measure. We therefore use a slightly different score for our RL experiments which we call the "GLEU score". For the GLEU score, we record all sub-sequences of 1, 2, 3 or 4 tokens in output and target sequence (n-grams). We then compute a recall, which is the ratio of the number of matching n-grams to the number of total n-grams in the target (ground truth) sequence, and a precision, which is the ratio of the number of matching n-grams to the number of total n-grams in the generated output sequence. Then GLEU score is simply the minimum of recall and precision. This GLEU score's range is always between 0 (no matches) and 1 (all match) and it is symmetrical when switching output and target. According to our experiments, GLEU score correlates quite well with the BLEU metric on a corpus level but does not have its drawbacks for our per sentence reward objective. | 1609.08144#32 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
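The GLEU score described above is the minimum of recall and precision over matching 1-4 grams between output and target. Below is a hedged sketch; clipped matching via Counter intersection is an implementation assumption rather than something the text specifies.

```python
from collections import Counter

def ngrams(tokens, max_n=4):
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def gleu(output, target, max_n=4):
    """Sentence-level GLEU: min of n-gram recall and precision."""
    out_counts, tgt_counts = ngrams(output, max_n), ngrams(target, max_n)
    matches = sum((out_counts & tgt_counts).values())    # overlapping n-grams
    recall = matches / max(sum(tgt_counts.values()), 1)
    precision = matches / max(sum(out_counts.values()), 1)
    return min(recall, precision)

# gleu("the cat sat".split(), "the cat sat".split()) -> 1.0
```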
1609.07843 | 33 | [Figure 4 image: bar chart; y-axis "Mean difference in log perplexity (higher is better)", x-axis "Word buckets of equal size (frequent words on left)".]
Figure 4. Mean difference in log perplexity on PTB when using the pointer sentinel-LSTM compared to the LSTM model. Words were sorted by frequency and split into equal sized buckets.
Table 3 shows a similar gain made by the pointer sentinel-LSTM over the variational LSTM models. The variational LSTM from Gal (2015) again beats out the variational LSTM used as a base for our experiments.
# 6. Analysis
# 6.1. Impact on Rare Words
# 6.2. Qualitative Analysis of Pointer Usage
In a qualitative analysis, we visualized the gate use and pointer attention for a variety of examples in the validation set, focusing on predictions where the gate primarily used the pointer component. These visualizations are available in the supplementary material.
A hypothesis as to why the pointer sentinel-LSTM can outperform an LSTM is that the pointer component allows the model to effectively reproduce rare words. An RNN may be able to better use the hidden state capacity by deferring to the pointer component. The pointer component may also allow for a sharper selection of a single word than may be possible using only the softmax. | 1609.07843#33 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
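Figure 4 above sorts words by frequency, splits them into equal-sized buckets, and reports the mean difference in log perplexity between the LSTM and the pointer sentinel-LSTM per bucket. A sketch of that bookkeeping is below; the aligned per-token input arrays and the bucket count are assumptions.

```python
import numpy as np

def bucket_improvement(freq_rank, log_ppl_lstm, log_ppl_ptr, n_buckets=10):
    """Mean difference in log perplexity (LSTM minus pointer sentinel-LSTM)
    per frequency bucket, frequent words first."""
    order = np.argsort(freq_rank)                       # rank 1 = most frequent
    diff = (np.asarray(log_ppl_lstm) - np.asarray(log_ppl_ptr))[order]
    return [float(b.mean()) for b in np.array_split(diff, n_buckets)]
```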
1609.08144 | 33 | As is common practice in reinforcement learning, we subtract the mean reward from $r(Y, Y^{*(i)})$ in equation 8. The mean is estimated to be the sample mean of m sequences drawn independently from distribution $P_\theta(Y \mid X^{(i)})$. In our implementation, m is set to be 15. To further stabilize training, we optimize a linear combination of ML (equation 7) and RL (equation 8) objectives as follows:
$$\mathcal{O}_{\mathrm{Mixed}}(\theta) = \alpha \cdot \mathcal{O}_{\mathrm{ML}}(\theta) + \mathcal{O}_{\mathrm{RL}}(\theta) \qquad (9)$$
α in our implementation is typically set to be 0.017.
In our setup, we first train a model using the maximum likelihood objective (equation 7) until convergence. We then refine this model using a mixed maximum likelihood and expected reward objective (equation 9), until BLEU score on a development set is no longer improving. The second step is optional.
# 6 Quantizable Model and Quantized Inference | 1609.08144#33 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
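The chunk above combines the ML and RL objectives with weight α (0.017 in the paper's implementation) and subtracts a sample-mean reward baseline estimated from m = 15 sampled sequences. A hedged sketch of one training step's loss is below; `model.log_prob` and `model.sample` are assumed interfaces, not the paper's code.

```python
import torch

def mixed_loss(model, x, y_star, reward_fn, alpha=0.017, m=15):
    """Mixed objective sketch (equation 9) with mean-reward baseline."""
    ml_loss = -model.log_prob(x, y_star)                 # maximum-likelihood term
    samples = [model.sample(x) for _ in range(m)]        # (sequence, log_prob) pairs
    rewards = torch.tensor([reward_fn(y, y_star) for y, _ in samples])
    baseline = rewards.mean()                            # subtract the mean reward
    rl_loss = -torch.stack([lp * (r - baseline)
                            for (_, lp), r in zip(samples, rewards)]).mean()
    return alpha * ml_loss + rl_loss
```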
1609.07843 | 34 | Figure 4 shows the improvement of perplexity when comparing the LSTM to the pointer sentinel-LSTM with words split across buckets according to frequency. It shows that the pointer sentinel-LSTM has stronger improvements as words become rarer. Even on the Penn Treebank, where there is a relative absence of rare words due to only selecting the most frequent 10k words, we can see the pointer sentinel-LSTM mixture model provides a direct benefit.
While the improvements are largest on rare words, we can see that the pointer sentinel-LSTM is still helpful on relatively frequent words. This may be the pointer component directly selecting the word or through the pointer supervision signal improving the RNN by allowing gradients to flow directly to other occurrences of the word in that window.
4 https://github.com/yaringal/BayesianRNN
As expected, the pointer component is heavily used for rare names such as Seidman (23 times in training), Iverson (7 times in training), and Rosenthal (3 times in training). | 1609.07843#34 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 34 | # 6 Quantizable Model and Quantized Inference
One of the main challenges in deploying our Neural Machine Translation model to our interactive production translation service is that it is computationally intensive at inference, making low latency translation difficult, and high volume deployment computationally expensive. Quantized inference using reduced precision arithmetic is one technique that can significantly reduce the cost of inference for these models, often providing efficiency improvements on the same computational devices. For example, in [43], it is demonstrated that a convolutional neural network model can be sped up by a factor of 4-6 with minimal loss on classification accuracy on the ILSVRC-12 benchmark. In [27], it is demonstrated that neural network model weights can be quantized to only three states, -1, 0, and +1.
Many of those previous studies [19, 20, 43, 27] however mostly focus on CNN models with relatively few layers. Deep LSTMs with long sequences pose a novel challenge in that quantization errors can be significantly amplified after many unrolled steps or after going through a deep LSTM stack. | 1609.08144#34 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
1609.07843 | 35 | As expected, the pointer component is heavily used for rare names such as Seidman (23 times in training), Iverson (7 times in training), and Rosenthal (3 times in training).
The pointer component was also heavily used when it came to other named entity names such as companies like Honeywell (8 times in training) and Integrated (41 times in training, though due to lowercasing of words this includes integrated circuits, fully integrated, and other generic usage).
Surprisingly, the pointer component was also used for many frequent tokens. For selecting the unit of measurement (tons, kilograms, . . . ) or the short scale of numbers (thousands, millions, billions, . . . ), the pointer would refer to previous recent usage. This is to be expected, especially when phrases are of the form increased from N tons to N tons. The model can even be found relying on a mixture of the softmax and the pointer for predicting certain frequent verbs such as said. | 1609.07843#35 | Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus. | http://arxiv.org/pdf/1609.07843 | Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher | cs.CL, cs.AI | null | null | cs.CL | 20160926 | 20160926 | [
{
"id": "1607.03474"
},
{
"id": "1608.04207"
},
{
"id": "1606.01305"
},
{
"id": "1603.08148"
},
{
"id": "1603.01547"
},
{
"id": "1512.05287"
}
] |
1609.08144 | 35 | In this section, we present our approach to speed up inference with quantized arithmetic. Our solution is tailored towards the hardware options available at Google. To reduce quantization errors, additional constraints are added to our model during training so that it is quantizable with minimal impact on the output of the model. That is, once a model is trained with these additional constraints, it can be subsequently quantized without loss to translation quality. Our experimental results suggest that those additional constraints do not hurt model convergence nor the quality of a model once it has converged.
Recall from equation 6 that in an LSTM stack with residual connections there are two accumulators: $c^i_t$ along the time axis and $x^i_t$ along the depth axis. In theory, both of the accumulators are unbounded, but in practice, we noticed their values remain quite small. For quantized inference, we explicitly constrain the values of these accumulators to be within [-δ, δ] to guarantee a certain range that can be used for quantization later. The forward computation of an LSTM stack with residual connections is modified to the following: | 1609.08144#35 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves competitive
results to state-of-the-art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system. | http://arxiv.org/pdf/1609.08144 | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20160926 | 20161008 | [
{
"id": "1603.06147"
}
] |
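The chunk above constrains the two residual-LSTM accumulators to [-δ, δ] during training so that a fixed range is available for quantized inference. A minimal sketch of that clipping is below; the value of δ and where exactly it is applied in the unrolled forward pass are assumptions for illustration.

```python
import torch

def clip_accumulators(c_t, x_t, delta=1.0):
    """Clip the cell accumulator c_t (time axis) and the residual accumulator
    x_t (depth axis) to [-delta, delta] during the forward pass."""
    return torch.clamp(c_t, -delta, delta), torch.clamp(x_t, -delta, delta)

# Roughly, inside residual LSTM layer i at step t one would apply:
#   c_t = clamp(c_t);  x_t = clamp(m_t + x_from_previous_layer)
```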