doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, len 31) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1605.08803 | 44 | Figure 13: Manifold from a model trained on CelebA. Images with red borders are taken from the training set, and define the manifold. The manifold was computed as described in Equation (19), where the x-axis corresponds to φ and the y-axis to φ′, with φ and φ′ taking equally spaced angular values.
Figure 14: Manifold from a model trained on LSUN (bedroom category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation (19), where the x-axis corresponds to φ and the y-axis to φ′, with φ and φ′ taking equally spaced angular values.
Figure 15: Manifold from a model trained on LSUN (church outdoor category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation (19), where the x-axis corresponds to φ and the y-axis to φ′, with φ and φ′ taking equally spaced angular values.
| 1605.08803#44 | Density estimation using Real NVP | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations. | http://arxiv.org/pdf/1605.08803 | Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | cs.LG, cs.AI, cs.NE, stat.ML | 10 pages of main content, 3 pages of bibliography, 18 pages of
appendix. Accepted at ICLR 2017 | null | cs.LG | 20160527 | 20170227 | [
{
"id": "1602.05110"
},
{
"id": "1602.05473"
},
{
"id": "1511.07122"
},
{
"id": "1511.01029"
},
{
"id": "1606.04934"
},
{
"id": "1511.06281"
},
{
"id": "1601.06759"
},
{
"id": "1502.03167"
},
{
"id": "1602.07714"
},
{
"id": "1602.07868"
},
{
"id": "1603.04467"
},
{
"id": "1603.08511"
},
{
"id": "1511.06349"
},
{
"id": "1506.03365"
},
{
"id": "1604.08772"
},
{
"id": "1511.05666"
},
{
"id": "1505.05770"
},
{
"id": "1511.06499"
},
{
"id": "1509.00519"
},
{
"id": "1511.06391"
},
{
"id": "1603.00748"
}
] |
1605.08803 | 45 | Figure 16: Manifold from a model trained on LSUN (tower category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation (19), where the x-axis corresponds to φ and the y-axis to φ′, with φ and φ′ taking equally spaced angular values.
# C Extrapolation
Inspired by the texture generation work of [19, 61] and the extrapolation test with DCGAN [47], we also evaluate the statistics captured by our model by generating images two or ten times as large as those present in the dataset. As we can observe in the following figures, our model seems to successfully create a "texture" representation of the dataset while maintaining spatial smoothness through the image. Our convolutional architecture is only aware of the position of the considered pixel through edge effects in convolutions, so our model behaves similarly to a stationary process. This also explains why these samples are more consistent on LSUN, where the training data was obtained using random crops.
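Because the learned transform is convolutional and the prior is a factorized Gaussian, generating a larger sample amounts to drawing a spatially larger latent tensor and decoding it. The following is only a minimal sketch of that idea; `inverse_flow` and the latent shape are hypothetical stand-ins, not part of any released implementation.

```python
# Hypothetical sketch of the extrapolation experiment: sample a larger latent
# tensor from the Gaussian prior and push it through the inverse transform.
# `inverse_flow` (g = f^{-1}) is an assumed callable, not a released API.
import numpy as np

def sample_larger(inverse_flow, base_shape=(64, 64, 3), scale=2, seed=0):
    rng = np.random.default_rng(seed)
    h, w, c = base_shape
    z = rng.standard_normal((h * scale, w * scale, c)).astype(np.float32)
    return inverse_flow(z)    # image of shape (h*scale, w*scale, c)
```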
(a) ×2
(b) ×10
Figure 17: We generate samples two and ten times larger than the training set image size on ImageNet (64×64).
(a) ×2 | 1605.08803#45 | Density estimation using Real NVP |
1605.08803 | 46 | (a) ×2
(b) ×10
Figure 18: We generate samples two and ten times larger than the training set image size on CelebA.
(a) ×2
(b) ×10
Figure 19: We generate samples two and ten times larger than the training set image size on LSUN (bedroom category).
(a) ×2
(b) ×10
Figure 20: We generate samples two and ten times larger than the training set image size on LSUN (church outdoor category).
(a) ×2 | 1605.08803#46 | Density estimation using Real NVP |
1605.08803 | 47 | (a) ×2
(b) ×10
Figure 21: We generate samples two and ten times larger than the training set image size on LSUN (tower category).
# D Latent variable semantics
As in [22], we further try to grasp the semantics of the latent variables learned by our layers by performing ablation tests. We infer the latent variables and resample the lowest levels of latent variables from a standard Gaussian, increasing the highest level affected by this resampling. As we can see in the following figures, the semantics of our latent space seem to operate at a graphical level rather than at the level of higher concepts. Although the heavy use of convolution improves learning by exploiting image prior knowledge, it is also likely to be responsible for this limitation.
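A minimal sketch of this ablation follows, assuming hypothetical `forward_flow`/`inverse_flow` callables that expose the latent variables as a single array ordered from the highest (coarsest) level to the lowest; neither name comes from the paper or any released code.

```python
# Hypothetical sketch of the ablation: infer the latents for an image, keep
# only a fraction of the highest-level ones, resample the rest from N(0, I),
# and decode. `forward_flow` (f) and `inverse_flow` (g = f^{-1}) are assumed.
import numpy as np

def conceptual_compression(x, forward_flow, inverse_flow, keep_fraction=0.25, seed=0):
    rng = np.random.default_rng(seed)
    z = np.asarray(forward_flow(x))                # infer latent variables for x
    flat = z.ravel().copy()
    n_keep = int(round(keep_fraction * flat.size))
    resampled = rng.standard_normal(flat.size)     # fresh N(0, I) noise
    resampled[:n_keep] = flat[:n_keep]             # keep the high-level part
    return inverse_flow(resampled.reshape(z.shape))
```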
Figure 22: Conceptual compression from a model trained on ImageNet (64 × 64). The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept. | 1605.08803#47 | Density estimation using Real NVP |
1605.08803 | 48 | Figure 23: Conceptual compression from a model trained on CelebA. The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
Figure 24: Conceptual compression from a model trained on LSUN (bedroom category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
Figure 25: Conceptual compression from a model trained on LSUN (church outdoor category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
| 1605.08803#48 | Density estimation using Real NVP |
1605.08803 | 49 | Figure 26: Conceptual compression from a model trained on LSUN (tower category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
# E Batch normalization
We further experimented with batch normalization by using a weighted average of a moving average of the layer statistics μ̃_t, σ̃²_t and the current batch statistics μ̂_t, σ̂²_t:

μ̃_{t+1} = ρ μ̃_t + (1 − ρ) μ̂_t,    (20)

σ̃²_{t+1} = ρ σ̃²_t + (1 − ρ) σ̂²_t,    (21)

where ρ is the momentum. When using μ̃_{t+1}, σ̃²_{t+1}, we only propagate gradients through the current batch statistics μ̂_t, σ̂²_t. We observe that using this lag helps the model train with very small minibatches.
We used batch normalization with a moving average for our results on CIFAR-10.
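A minimal NumPy sketch of the update in Equations (20)-(21) follows; variable names are ours, and in a real framework gradients would be propagated only through the current batch statistics, not through the running averages.

```python
# Minimal sketch of the moving-average batch statistics of Eq. (20)-(21).
# rho is the momentum; in a real framework, gradients flow only through
# mu_hat / var_hat (the current batch), not through the running averages.
import numpy as np

def update_moving_stats(mu_tilde, var_tilde, batch, rho=0.99, eps=1e-5):
    batch = np.asarray(batch, dtype=np.float64)   # (batch_size, features)
    mu_hat = batch.mean(axis=0)                   # current batch mean
    var_hat = batch.var(axis=0)                   # current batch variance
    mu_tilde = rho * mu_tilde + (1.0 - rho) * mu_hat        # Eq. (20)
    var_tilde = rho * var_tilde + (1.0 - rho) * var_hat     # Eq. (21)
    normalized = (batch - mu_tilde) / np.sqrt(var_tilde + eps)
    return normalized, mu_tilde, var_tilde
```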
# F Attribute change | 1605.08803#49 | Density estimation using Real NVP |
1605.08803 | 50 | We used batch normalization with a moving average for our results on CIFAR-10.
# F Attribute change
Additionally, we exploit the attribute information y in CelebA to build a conditional model, i.e. the invertible function f from image to latent variable uses the labels in y to define its parameters. In order to observe the information stored in the latent variables, we choose to encode a batch of images x with their original attributes y and decode them using a new set of attributes y′, built by shuffling the original attributes inside the batch. We obtain the new images x′ = g(f(x; y); y′). We observe that, although the faces are changed so as to respect the new attributes, several properties remain unchanged, such as position and background.
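A sketch of this attribute swap is given below; `cond_forward` and `cond_inverse` are hypothetical names standing in for the conditional forward and inverse transforms, not part of any released code.

```python
# Hypothetical sketch of the attribute swap x' = g(f(x; y); y'): encode each
# image with its true attributes, then decode with attributes shuffled inside
# the batch. `cond_forward` (f) and `cond_inverse` (g) are assumed callables.
import numpy as np

def swap_attributes(x_batch, y_batch, cond_forward, cond_inverse, seed=0):
    rng = np.random.default_rng(seed)
    y_batch = np.asarray(y_batch)
    z = cond_forward(x_batch, y_batch)                   # latent code given y
    y_shuffled = y_batch[rng.permutation(len(y_batch))]  # new attribute vectors
    return cond_inverse(z, y_shuffled)                   # x' = g(f(x; y); y')
```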
Figure 27: Examples x from the CelebA dataset.
| 1605.08803#50 | Density estimation using Real NVP |
1605.07725 | 0 | arXiv:1605.07725v4 [stat.ML] 16 Nov 2021
Published as a conference paper at ICLR 2017
# ADVERSARIAL TRAINING METHODS FOR SEMI-SUPERVISED TEXT CLASSIFICATION
Takeru Miyato1,2∗, Andrew M Dai2, Ian Goodfellow3 [email protected], [email protected], [email protected] 1 Preferred Networks, Inc., ATR Cognitive Mechanisms Laboratories, Kyoto University 2 Google Brain 3 OpenAI
# ABSTRACT | 1605.07725#0 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 1 | # ABSTRACT
Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting. Code is available at https://github.com/tensorflow/models/tree/master/research/adversarial_text.
# INTRODUCTION | 1605.07725#1 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 2 | # INTRODUCTION
Adversarial examples are examples that are created by making small perturbations to the input designed to significantly increase the loss incurred by a machine learning model (Szegedy et al., 2014; Goodfellow et al., 2015). Several models, including state of the art convolutional neural networks, lack the ability to classify adversarial examples correctly, sometimes even when the adversarial perturbation is constrained to be so small that a human observer cannot perceive it. Adversarial training is the process of training a model to correctly classify both unmodified examples and adversarial examples. It improves not only robustness to adversarial examples, but also generalization performance for original examples. Adversarial training requires the use of labels when training models that use a supervised cost, because the label appears in the cost function that the adversarial perturbation is designed to maximize. Virtual adversarial training (Miyato et al., 2016) extends the idea of adversarial training to the semi-supervised regime and unlabeled examples. This is done by regularizing the model so that, given an example, the model will produce the same output distribution as it produces on an adversarial perturbation of that example. Virtual adversarial training achieves good generalization performance for both supervised and semi-supervised learning tasks. | 1605.07725#2 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 3 | Previous work has primarily applied adversarial and virtual adversarial training to image classification tasks. In this work, we extend these techniques to text classification tasks and sequence models. Adversarial perturbations typically consist of making small modifications to very many real-valued inputs. For text classification, the input is discrete, and usually represented as a series of high-dimensional one-hot vectors. Because the set of high-dimensional one-hot vectors does not admit infinitesimal perturbation, we define the perturbation on continuous word embeddings instead of discrete word inputs. Traditional adversarial and virtual adversarial training can be interpreted both as a regularization strategy (Szegedy et al., 2014; Goodfellow et al., 2015; Miyato et al., 2016) and as defense against an adversary who can supply malicious inputs (Szegedy et al., 2014; Goodfellow et al., 2015). Since the perturbed embedding does not map to any word and the adversary presumably does not have access to the word embedding layer, our proposed training strategy is no longer intended as | 1605.07725#3 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 4 | ∗This work was done when the author was at Google Brain.
a defense against an adversary. We thus propose this approach exclusively as a means of regularizing a text classifier by stabilizing the classification function.
We show that our approach with neural language model unsupervised pretraining, as proposed by Dai & Le (2015), achieves state of the art performance for multiple semi-supervised text classification tasks, including sentiment classification and topic classification. We emphasize that optimization of only one additional hyperparameter ε, the norm constraint limiting the size of the adversarial perturbations, achieved such state of the art performance. These results strongly encourage the use of our proposed method for other text classification tasks. We believe that text classification is an ideal setting for semi-supervised learning because there are abundant unlabeled corpora for semi-supervised learning algorithms to leverage. This work is the first work we know of to use adversarial and virtual adversarial training to improve a text or RNN model. | 1605.07725#4 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 5 | We also analyzed the trained models to qualitatively characterize the effect of adversarial and virtual adversarial training. We found that adversarial and virtual adversarial training improved word embeddings over the baseline methods.
# 2 MODEL
We denote a sequence of T words as {w(t) | t = 1, . . . , T}, and a corresponding target as y. To transform a discrete word input to a continuous vector, we define the word embedding matrix V ∈ R^((K+1)×D), where K is the number of words in the vocabulary and each row v_k corresponds to the word embedding of the k-th word. Note that the (K+1)-th word embedding is used as an embedding of an "end of sequence (eos)" token, v_eos. As a text classification model, we used a simple LSTM-based neural network model, shown in Figure 1a. At time step t, the input is the discrete word w(t), and the corresponding word embedding is v(t). We additionally tried the bidirectional | 1605.07725#5 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 6 | (a) LSTM-based text classification model. (b) The model with perturbed embeddings.
Figure 1: Text classification models with clean embeddings (a) and with perturbed embeddings (b).
LSTM architecture (Graves & Schmidhuber, 2005) since this is used by the current state of the art method (Johnson & Zhang, 2016b). For constructing the bidirectional LSTM model for text classification, we add an additional LSTM on the reversed sequence to the unidirectional LSTM model described in Figure 1. The model then predicts the label on the concatenated LSTM outputs of both ends of the sequence. | 1605.07725#6 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 7 | In adversarial and virtual adversarial training, we train the classifier to be robust to perturbations of the embeddings, shown in Figure 1b. These perturbations are described in detail in Section 3. At present, it is sufficient to understand that the perturbations are of bounded norm. The model could trivially learn to make the perturbations insignificant by learning embeddings with very large norm. To prevent this pathological solution, when we apply adversarial and virtual adversarial training to the model we defined above, we replace the embeddings v_k with normalized embeddings v̄_k, defined as:
v̄_k = (v_k − E(v)) / √Var(v),  where  E(v) = Σ_{j=1}^{K} f_j v_j,  Var(v) = Σ_{j=1}^{K} f_j (v_j − E(v))²,    (1) | 1605.07725#7 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 8 |
where f_i is the frequency of the i-th word, calculated within all training examples.
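A small NumPy sketch of the normalization in Eq. (1) follows; `V` and `freqs` are assumed to be the (K × D) embedding matrix and the vector of word frequencies, and the names are ours.

```python
# Sketch of the normalization in Eq. (1): each embedding row is centered and
# scaled by the frequency-weighted mean and variance over the vocabulary.
import numpy as np

def normalize_embeddings(V, freqs, eps=1e-12):
    """V: (K, D) embedding matrix; freqs: (K,) word frequencies summing to 1."""
    V = np.asarray(V, dtype=np.float64)
    f = np.asarray(freqs, dtype=np.float64)[:, None]
    mean = (f * V).sum(axis=0)                  # E(v)   = sum_j f_j v_j
    var = (f * (V - mean) ** 2).sum(axis=0)     # Var(v) = sum_j f_j (v_j - E(v))^2
    return (V - mean) / np.sqrt(var + eps)      # v_bar_k = (v_k - E(v)) / sqrt(Var(v))
```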
# 3 ADVERSARIAL AND VIRTUAL ADVERSARIAL TRAINING
Adversarial training (Goodfellow et al., 2015) is a novel regularization method for classifiers to improve robustness to small, approximately worst-case perturbations. Let us denote x as the input and θ as the parameters of a classifier. When applied to a classifier, adversarial training adds the following term to the cost function:
−log p(y | x + r_adv; θ)  where  r_adv = arg min_{r, ‖r‖ ≤ ε} log p(y | x + r; θ̂)    (2) | 1605.07725#8 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 9 | where r is a perturbation on the input and θ̂ is a constant set to the current parameters of the classifier. The use of the constant copy θ̂ rather than θ indicates that the backpropagation algorithm should not be used to propagate gradients through the adversarial example construction process. At each step of training, we identify the worst case perturbation r_adv against the current model p(y|x; θ̂) in Eq. (2), and train the model to be robust to such perturbations by minimizing Eq. (2) with respect to θ. However, we cannot calculate this value exactly in general, because exact minimization with respect to r is intractable for many interesting models such as neural networks. Goodfellow et al. (2015) proposed to approximate this value by linearizing log p(y | x; θ̂) around x. With a linear approximation and an L2 norm constraint in Eq. (2), the resulting adversarial perturbation is
r_adv = −ε g / ‖g‖₂  where  g = ∇_x log p(y | x; θ̂).
This perturbation can be easily computed using backpropagation in neural networks. | 1605.07725#9 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 10 | This perturbation can be easily computed using backpropagation in neural networks.
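A minimal sketch of this computation is shown below; `grad_log_p` is an assumed callable returning ∇_x log p(y | x; θ̂), e.g. obtained by backpropagation in whatever framework holds the model.

```python
# Sketch of the linearized adversarial perturbation r_adv = -eps * g / ||g||_2.
# `grad_log_p(x, y)` is an assumed callable returning the gradient of
# log p(y | x; theta_hat) with respect to x.
import numpy as np

def adversarial_perturbation(x, y, grad_log_p, eps=1.0):
    g = np.asarray(grad_log_p(x, y), dtype=np.float64)
    norm = np.linalg.norm(g) + 1e-12      # guard against a zero gradient
    return -eps * g / norm                # direction that most increases the loss
```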
Virtual adversarial training (Miyato et al., 2016) is a regularization method closely related to adversarial training. The additional cost introduced by virtual adversarial training is the following:
KL[ p(· | x; θ̂) ‖ p(· | x + r_v-adv; θ) ]    (3)
where r_v-adv = arg max_{r, ‖r‖ ≤ ε} KL[ p(· | x; θ̂) ‖ p(· | x + r; θ̂) ]    (4) | 1605.07725#10 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 11 | where KL[p‖q] denotes the KL divergence between distributions p and q. By minimizing Eq. (3), a classifier is trained to be smooth. This can be considered as making the classifier resistant to perturbations in the directions to which it is most sensitive on the current model p(y|x; θ̂). The virtual adversarial loss in Eq. (3) requires only the input x and does not require the actual label y, while the adversarial loss defined in Eq. (2) requires the label y. This makes it possible to apply virtual adversarial training to semi-supervised learning. Although we also in general cannot analytically calculate the virtual adversarial loss, Miyato et al. (2016) proposed to calculate an approximation of Eq. (3) efficiently with backpropagation. | 1605.07725#11 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 12 | As described in Sec. 2, in our work, we apply the adversarial perturbation to word embeddings, rather than directly to the input. To define adversarial perturbation on the word embeddings, let us denote a concatenation of a sequence of (normalized) word embedding vectors [v̄(1), v̄(2), . . . , v̄(T)] as s, and the model conditional probability of y given s as p(y|s; θ) where θ are model parameters. Then we define the adversarial perturbation r_adv on s as:
r_adv = −ε g / ‖g‖₂  where  g = ∇_s log p(y | s; θ̂).    (5)
To be robust to the adversarial perturbation defined in Eq. (5), we define the adversarial loss by
L_adv(θ) = −(1/N) Σ_{n=1}^{N} log p(y_n | s_n + r_adv,n; θ)    (6) | 1605.07725#12 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 13 | where N is the number of labeled examples. In our experiments, adversarial training refers to minimizing the negative log-likelihood plus L_adv with stochastic gradient descent.
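A sketch of Eq. (5)-(6) in NumPy is given below; `log_p` and `grad_log_p` are assumed callables giving log p(y | s; θ) and its gradient with respect to the embedding sequence s (e.g. via backpropagation), so this is only an illustration of the loss, not the paper's implementation.

```python
# Sketch of the adversarial loss on embeddings, Eq. (5)-(6). `log_p(s, y)` and
# `grad_log_p(s, y)` are assumed callables for log p(y | s; theta) and its
# gradient with respect to s.
import numpy as np

def adversarial_loss(batch_s, batch_y, log_p, grad_log_p, eps=5.0):
    losses = []
    for s, y in zip(batch_s, batch_y):
        s = np.asarray(s, dtype=np.float64)
        g = np.asarray(grad_log_p(s, y), dtype=np.float64)
        r_adv = -eps * g / (np.linalg.norm(g) + 1e-12)    # Eq. (5)
        losses.append(-log_p(s + r_adv, y))               # one term of Eq. (6)
    return float(np.mean(losses))
```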
In virtual adversarial training on our text classification model, at each training step we calculate the following approximation to the virtual adversarial perturbation:
r_v-adv = ε g / ‖g‖₂  where  g = ∇_{s+d} KL[ p(· | s; θ̂) ‖ p(· | s + d; θ̂) ]    (7)
where d is a TD-dimensional small random vector. This approximation corresponds to a second-order Taylor expansion and a single iteration of the power method on Eq. (3), as in previous work (Miyato et al., 2016). The virtual adversarial loss is then defined as:
L_v-adv(θ) = (1/N′) Σ_{n′=1}^{N′} KL[ p(· | s_{n′}; θ̂) ‖ p(· | s_{n′} + r_v-adv,n′; θ) ]    (8) | 1605.07725#13 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 14 | where N′ is the number of both labeled and unlabeled examples.
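A sketch of the one-step power-method approximation in Eq. (7)-(8) follows; `predict` and `grad_kl` are assumed callables returning the model distribution p(· | s) and the gradient of the KL term with respect to the random direction d, and the scale xi of the initial random direction is an assumption of the sketch.

```python
# Sketch of the virtual adversarial loss, Eq. (7)-(8). `predict(s)` returns the
# model distribution p(. | s) as a probability vector; `grad_kl(s, d)` returns
# the gradient of KL[p(.|s) || p(.|s+d)] with respect to d. Both are assumed.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p, dtype=np.float64), np.asarray(q, dtype=np.float64)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def virtual_adversarial_loss(batch_s, predict, grad_kl, eps=5.0, xi=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    losses = []
    for s in batch_s:                                    # labeled and unlabeled
        s = np.asarray(s, dtype=np.float64)
        d = xi * rng.standard_normal(s.shape)            # small random direction
        g = np.asarray(grad_kl(s, d), dtype=np.float64)  # one power-method step
        r_vadv = eps * g / (np.linalg.norm(g) + 1e-12)   # Eq. (7)
        losses.append(kl_divergence(predict(s), predict(s + r_vadv)))  # Eq. (8)
    return float(np.mean(losses))
```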
See Warde-Farley & Goodfellow (2016) for a recent review of adversarial training methods.
# 4 EXPERIMENTAL SETTINGS
All experiments used TensorFlow (Abadi et al., 2016) on GPUs. To compare our method with other text classification methods, we tested on 5 different text datasets. We summarize information about each dataset in Table 1. | 1605.07725#14 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 15 | IMDB (Maas et al., 2011)1 is a standard benchmark movie review dataset for sentiment classification. Elec (Johnson & Zhang, 2015b)2 3 is an Amazon electronic product review dataset. Rotten Tomatoes (Pang & Lee, 2005) consists of short snippets of movie reviews, for sentiment classification. The Rotten Tomatoes dataset does not come with separate test sets, thus we divided all examples randomly into 90% for the training set, and 10% for the test set. We repeated training and evaluation five times with different random seeds for the division. For the Rotten Tomatoes dataset, we also collected unlabeled examples using movie reviews from the Amazon Reviews dataset (McAuley & Leskovec, 2013)4. DBpedia (Lehmann et al., 2015; Zhang et al., 2015) is a dataset of Wikipedia pages for category classification. Because the DBpedia dataset has no additional unlabeled examples, the results on DBpedia are for the supervised learning task only. RCV1 (Lewis et al., 2004) consists of news articles from the Reuters Corpus. For the RCV1 | 1605.07725#15 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 16 | the results on DBpedia are for the supervised learning task only. RCV1 (Lewis et al., 2004) consists of news articles from the Reuters Corpus. For the RCV1 dataset, we followed previous works (Johnson & Zhang, 2015b) and we conducted a single topic classification task on the second level topics. We used the same division into training, test and unlabeled sets as Johnson & Zhang (2015b). Regarding pre-processing, we treated any punctuation as spaces. We converted all words to lower-case on the Rotten Tomatoes, DBpedia, and RCV1 datasets. We removed words which appear in only one document on all datasets. On RCV1, we also removed words in the English stop-words list provided by Lewis et al. (2004)5. | 1605.07725#16 | Adversarial Training Methods for Semi-Supervised Text Classification |
1605.07725 | 17 | Table 1: Summary of datasets. Note that unlabeled examples for the Rotten Tomatoes dataset are not provided so we instead use the unlabeled Amazon reviews dataset.
| Dataset | Classes | Train | Test | Unlabeled | Avg. T | Max T |
|---|---|---|---|---|---|---|
| IMDB | 2 | 25,000 | 25,000 | 50,000 | 239 | 2,506 |
| Elec | 2 | 24,792 | 24,897 | 197,025 | 110 | 5,123 |
| Rotten Tomatoes | 2 | 9596 | 1066 | 7,911,684 | 20 | 54 |
| DBpedia | 14 | 560,000 | 70,000 | — | 49 | 953 |
| RCV1 | 55 | 15,564 | 49,838 | 668,640 | 153 | 9,852 |
4.1 RECURRENT LANGUAGE MODEL PRE-TRAINING
Following Dai & Le (2015), we initialized the word embedding matrix and LSTM weights with a pre-trained recurrent language model (Bengio et al., 2006; Mikolov et al., 2010) that was trained on
1http://ai.stanford.edu/~amaas/data/sentiment/ 2http://riejohnson.com/cnn_data.html 3There are some duplicated reviews in the original Elec dataset, and we used the dataset with removal of the duplicated reviews, provided by Johnson & Zhang (2015b), thus there are slightly fewer examples shown in Table 1 than the ones in previous works(Johnson & Zhang, 2015b; 2016b). | 1605.07725#17 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
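Chunk 17 above states that the word embedding matrix and LSTM weights of the classifier are initialized from a pre-trained recurrent language model. A minimal PyTorch-style sketch of that transfer step is shown below; the module layout and names are assumptions made for illustration, not the authors' TensorFlow implementation.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID = 10000, 256, 1024

def make_encoder():
    # embedding + LSTM, the part shared between the language model and the classifier
    return nn.ModuleDict({
        "embed": nn.Embedding(VOCAB, EMB),
        "lstm": nn.LSTM(EMB, HID, batch_first=True),
    })

language_model_encoder = make_encoder()     # assume this was trained on unlabeled text
classifier_encoder = make_encoder()         # encoder part of the classification model

# copy the embedding matrix and LSTM weights; the classifier's softmax head
# (not shown) is still initialized from scratch
classifier_encoder.load_state_dict(language_model_encoder.state_dict())
print(torch.equal(classifier_encoder["embed"].weight,
                  language_model_encoder["embed"].weight))   # True
```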
1605.07725 | 18 | # 4http://snap.stanford.edu/data/web-Amazon.html 5http://www.ai.mit.edu/projects/jmlr/papers/volume5/lewis04a/lyrl2004_rcv1v2_README.htm
both labeled and unlabeled examples. We used a unidirectional single-layer LSTM with 1024 hidden units. The word embedding dimension D was 256 on IMDB and 512 on the other datasets. We used a sampled softmax loss with 1024 candidate samples for training. For the optimization, we used the Adam optimizer (Kingma & Ba, 2015), with batch size 256, an initial learning rate of 0.001, and a 0.9999 learning rate exponential decay factor at each training step. We trained for 100,000 steps. We applied gradient clipping with norm set to 1.0 on all the parameters except word embeddings. To reduce runtime on GPU, we used truncated backpropagation up to 400 words from each end of the sequence. For regularization of the recurrent language model, we applied dropout (Srivastava et al., 2014) on the word embedding layer with 0.5 dropout rate. | 1605.07725#18 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
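The pre-training recipe in the chunk above (single-layer 1024-unit LSTM, embedding dropout of 0.5, Adam with a 0.001 learning rate decayed by 0.9999 per step, and gradient clipping on everything except the word embeddings) can be sketched as below. This is an illustrative PyTorch rendering rather than the authors' TensorFlow code, with a toy batch instead of the paper's batch size of 256, and a full softmax standing in for the sampled softmax with 1024 candidates.

```python
import torch
import torch.nn as nn

VOCAB, EMB_DIM, HIDDEN = 5000, 256, 1024       # 256-dim embeddings (IMDB), 1024-unit LSTM

class LMPretrainNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB_DIM)
        self.drop = nn.Dropout(0.5)              # dropout on the word embedding layer
        self.lstm = nn.LSTM(EMB_DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)      # stand-in for the sampled softmax loss

    def forward(self, tokens):
        h, _ = self.lstm(self.drop(self.embed(tokens)))
        return self.out(h)

model = LMPretrainNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9999)   # decay at each step

tokens = torch.randint(0, VOCAB, (8, 21))        # toy batch; the paper uses batch size 256
logits = model(tokens[:, :-1])                   # predict the next word at every position
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward()
# clip every parameter except the word embeddings, as described above
clip_params = [p for n, p in model.named_parameters() if not n.startswith("embed")]
torch.nn.utils.clip_grad_norm_(clip_params, max_norm=1.0)
opt.step(); sched.step(); opt.zero_grad()
print(float(loss))
```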
1605.07725 | 19 | For the bidirectional LSTM model, we used a 512-hidden-unit LSTM for both the standard order and reversed order sequences, and we used 256 dimensional word embeddings which are shared by both of the LSTMs. The other hyperparameters are the same as for the unidirectional LSTM. We tested the bidirectional LSTM model on IMDB, Elec and RCV1 because these datasets contain relatively long sentences.
Pretraining with a recurrent language model was very effective for classification performance on all the datasets we tested, so the results in Section 5 are reported with this pretraining.
4.2 TRAINING CLASSIFICATION MODELS | 1605.07725#19 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
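The bidirectional variant described in the chunk above uses a 512-unit LSTM per direction over a single shared 256-dimensional embedding. A hedged PyTorch sketch (not the authors' code) is:

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID = 10000, 256, 512
embed = nn.Embedding(VOCAB, EMB)                                   # shared by both directions
bilstm = nn.LSTM(EMB, HID, batch_first=True, bidirectional=True)   # forward + reversed order

tokens = torch.randint(0, VOCAB, (4, 50))
outputs, _ = bilstm(embed(tokens))
print(outputs.shape)   # (4, 50, 1024): forward and reverse hidden states concatenated
```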
1605.07725 | 20 | After pre-training, we trained the text classification model shown in Figure 1a with adversarial and virtual adversarial training as described in Section 3. Between the softmax layer for the target y and the final output of the LSTM, we added a hidden layer, which has dimension 30 on IMDB, Elec and Rotten Tomatoes, and 128 on DBpedia and RCV1. The activation function on the hidden layer was ReLU (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011). For optimization, we again used the Adam optimizer, with an initial learning rate of 0.0005 and a 0.9998 exponential decay. Batch sizes are 64 on IMDB, Elec, RCV1, and 128 on DBpedia. For the Rotten Tomatoes dataset, at each step we take a batch of size 64 for calculating the negative log-likelihood and adversarial training losses, and 512 for calculating the virtual adversarial training loss. Also for Rotten Tomatoes, we used texts with lengths T less than 25 in the unlabeled dataset. We iterated 10,000 training steps on all datasets except IMDB and DBpedia, for which we used 15,000 and 20,000 training steps respectively. We again | 1605.07725#20 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
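The classifier described in chunk 20 above places a small ReLU hidden layer (30 units on IMDB, Elec and Rotten Tomatoes; 128 on DBpedia and RCV1) between the LSTM output and the softmax, and trains it with Adam at a 0.0005 learning rate with 0.9998 per-step decay. The PyTorch sketch below is illustrative only and omits the adversarial terms added in the following records.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID, DENSE, CLASSES = 10000, 256, 1024, 30, 2

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)   # initialized from the pretrained LM in the paper
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.hidden = nn.Linear(HID, DENSE)     # extra hidden layer before the softmax
        self.out = nn.Linear(DENSE, CLASSES)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.out(torch.relu(self.hidden(h[:, -1])))  # last state -> ReLU layer -> logits

model = Classifier()
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9998)

tokens = torch.randint(0, VOCAB, (64, 30))
labels = torch.randint(0, CLASSES, (64,))
loss = nn.functional.cross_entropy(model(tokens), labels)
loss.backward(); opt.step(); sched.step(); opt.zero_grad()
print(float(loss))
```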
1605.07725 | 21 | We iterated 10,000 training steps on all datasets except IMDB and DBpedia, for which we used 15,000 and 20,000 training steps respectively. We again applied gradient clipping with the norm set to 1.0 on all the parameters except the word embeddings. We also used truncated backpropagation up to 400 words, and generated the adversarial and virtual adversarial perturbations up to 400 words from each end of the sequence. | 1605.07725#21 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 22 | We found the bidirectional LSTM to converge more slowly, so we iterated for 15,000 training steps when training the bidirectional LSTM classification model.
For each dataset, we divided the original training set into a training set and a validation set, and we roughly optimized the hyperparameters shared by all of the methods (model architecture, batch size, training steps) using the validation performance of the base model with embedding dropout. For each method, we then optimized two scalar hyperparameters on the validation set: the dropout rate on the embeddings and the norm constraint ε of adversarial and virtual adversarial training. Note that for adversarial and virtual adversarial training, we generate the perturbation after applying embedding dropout, which we found performed the best. We did not do early stopping with these methods. The method with only pretraining and embedding dropout is used as the baseline (referred to as Baseline in each table).
5 RESULTS
5.1 TEST PERFORMANCE ON IMDB DATASET AND MODEL ANALYSIS | 1605.07725#22 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
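Chunk 22 above notes that the adversarial perturbation is generated after embedding dropout and is controlled by the norm constraint ε. The sketch below shows the basic mechanics: take the gradient of the supervised loss with respect to the (dropout-masked) embeddings, L2-normalize it per example, scale by ε, and evaluate the loss again on the perturbed embeddings. It is an illustrative PyTorch rendering with a toy model, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, EMB, HID, CLASSES, EPS = 1000, 32, 64, 2, 5.0

embed = nn.Embedding(VOCAB, EMB)
lstm = nn.LSTM(EMB, HID, batch_first=True)
head = nn.Linear(HID, CLASSES)

def logits_from_emb(e):
    h, _ = lstm(e)
    return head(h[:, -1])                         # last hidden state feeds the softmax

tokens = torch.randint(0, VOCAB, (8, 20))
labels = torch.randint(0, CLASSES, (8,))

emb = F.dropout(embed(tokens), p=0.5, training=True)   # perturbation is generated after dropout
emb = emb.detach().requires_grad_(True)
loss = F.cross_entropy(logits_from_emb(emb), labels)
grad = torch.autograd.grad(loss, emb)[0]

# normalize over the whole embedded sequence of each example, then scale by epsilon
r_adv = EPS * grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1) + 1e-12)
adv_loss = F.cross_entropy(logits_from_emb(emb.detach() + r_adv), labels)
print(float(adv_loss))       # added to the ordinary supervised loss during training
```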
1605.07725 | 23 | 5 RESULTS
5.1 TEST PERFORMANCE ON IMDB DATASET AND MODEL ANALYSIS
Figure 2 shows the learning curves on the IMDB test set with the baseline method (only embedding dropout and pretraining), adversarial training, and virtual adversarial training. We can see in Figure 2a that adversarial and virtual adversarial training achieved lower negative log likelihood than the baseline. Furthermore, virtual adversarial training, which can utilize unlabeled data, maintained this low negative log-likelihood while the other methods began to overfit later in training. Regarding adversarial and virtual adversarial loss in Figure 2b and 2c, we can see the same tendency as for negative log likelihood; virtual adversarial training was able to keep these values lower than other
methods. Because adversarial training operates only on the labeled subset of the training data, it eventually overfits even the task of resisting adversarial perturbations. | 1605.07725#23 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
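The virtual adversarial loss tracked in the chunk above needs no labels: it measures how much a small worst-case perturbation of the embeddings changes the model's own predictive distribution, approximated with a single power-iteration step as in Miyato et al. (2016). The sketch below is illustrative PyTorch code with a toy model, not the released implementation; ξ and ε follow the usual notation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, EMB, HID, CLASSES, EPS, XI = 1000, 32, 64, 2, 5.0, 1e-1

embed = nn.Embedding(VOCAB, EMB)
lstm = nn.LSTM(EMB, HID, batch_first=True)
head = nn.Linear(HID, CLASSES)

def predict_logp(e):
    h, _ = lstm(e)
    return F.log_softmax(head(h[:, -1]), dim=-1)

def l2_unit(d):
    return d / (d.flatten(1).norm(dim=1).view(-1, 1, 1) + 1e-12)

tokens = torch.randint(0, VOCAB, (8, 20))            # may come from unlabeled text
emb = embed(tokens).detach()
p_clean = predict_logp(emb).exp().detach()           # current predictions, held fixed

d = l2_unit(torch.randn_like(emb)).requires_grad_(True)       # random start direction
kl = F.kl_div(predict_logp(emb + XI * d), p_clean, reduction="batchmean")
d_grad = torch.autograd.grad(kl, d)[0]               # direction that most changes the output

r_vadv = EPS * l2_unit(d_grad)                       # virtual adversarial perturbation
vadv_loss = F.kl_div(predict_logp(emb + r_vadv), p_clean, reduction="batchmean")
print(float(vadv_loss))      # added as a regularizer on labeled and unlabeled batches
```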
1605.07725 | 24 | [Figure 2 panels: test-set curves over training steps for the Baseline, Adversarial, and Virtual adversarial methods; (a) negative log likelihood, (b) adversarial loss Ladv(θ), (c) virtual adversarial loss Lv-adv(θ).] | 1605.07725#24 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 25 | Figure 2: Learning curves of (a) negative log likelihood, (b) adversarial loss (defined in Eq.(6)) and (c) virtual adversarial loss (defined in Eq.(8)) on IMDB. All values were evaluated on the test set. Adversarial and virtual adversarial loss were evaluated with ε = 5.0. The optimal value of ε differs between adversarial training and virtual adversarial training, but the value of 5.0 performs very well for both and provides a consistent point of comparison. | 1605.07725#25 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 26 | Table 2 shows the test performance on IMDB with each training method. "Adversarial + Virtual Adversarial" means the method with both adversarial and virtual adversarial loss with the shared norm constraint ε. With only embedding dropout, our model achieved a 7.39% error rate. Adversarial and virtual adversarial training improved the performance relative to our baseline, and virtual adversarial training achieved performance on par with the state of the art, 5.91% error rate. This is despite the fact that the state of the art model requires training a bidirectional LSTM whereas our model only uses a unidirectional LSTM. We also show results with a bidirectional LSTM. Our bidirectional LSTM model has the same performance as a unidirectional LSTM with virtual adversarial training. | 1605.07725#26 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 27 | A common misconception is that adversarial training is equivalent to training on noisy examples. Noise is actually a far weaker regularizer than adversarial perturbations because, in high dimensional input spaces, an average noise vector is approximately orthogonal to the cost gradient. Adversarial perturbations are explicitly chosen to consistently increase the cost. To demonstrate the superiority of adversarial training over the addition of noise, we include control experiments which replaced adversarial perturbations with random perturbations from a multivariate Gaussian with scaled norm, on each embedding in the sequence. In Table 2, "Random perturbation with labeled examples" is the method in which we replace r_adv with random perturbations, and "Random perturbation with labeled and unlabeled examples" is the method in which we replace r_v-adv with random perturbations. Every adversarial training method outperformed every random perturbation method. | 1605.07725#27 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
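The control experiment in the chunk above replaces the adversarial direction with Gaussian noise scaled to the same norm. The intuition that random noise is nearly orthogonal to the cost gradient in high dimensions can be checked directly with a toy NumPy example (a fixed logistic unit standing in for the network; all values are synthetic).

```python
import numpy as np

rng = np.random.default_rng(0)
D, EPS = 1024, 5.0
w = rng.normal(size=D) / np.sqrt(D)             # fixed "model" weights
x = rng.normal(size=D)                           # one embedded input, label y = 1

def loss(x_in):
    p = 1.0 / (1.0 + np.exp(-w @ x_in))          # sigmoid probability of the true class
    return -np.log(p)

grad_x = -(1.0 - 1.0 / (1.0 + np.exp(-w @ x))) * w    # gradient of the loss w.r.t. x
r_adv = EPS * grad_x / np.linalg.norm(grad_x)         # adversarial direction, norm EPS
r_rand = rng.normal(size=D)
r_rand = EPS * r_rand / np.linalg.norm(r_rand)        # random direction, same norm

print("clean loss:       ", round(loss(x), 3))
print("adversarial loss: ", round(loss(x + r_adv), 3))   # rises sharply
print("random-noise loss:", round(loss(x + r_rand), 3))  # stays close to the clean loss
```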
1605.07725 | 28 | To visualize the effect of adversarial and virtual adversarial training on embeddings, we examined embeddings trained using each method. Table 3 shows the 10 top nearest neighbors to "good" and "bad" with trained embeddings. The baseline and random methods are both strongly influenced by the grammatical structure of language, due to the language model pretraining step, but are not strongly influenced by the semantics of the text classification task. For example, "bad" appears in the list of nearest neighbors to "good" on the baseline and the random perturbation method. Both "bad" and "good" are adjectives that can modify the same set of nouns, so it is reasonable for a language model to assign them similar embeddings, but this clearly does not convey much information about the actual meaning of the words. Adversarial training ensures that the meaning of a sentence cannot be inverted via a small change, so these words with similar grammatical role but different meaning become separated. When using adversarial and virtual adversarial training, "bad" no longer appears in the 10 top nearest neighbors to "good". | 1605.07725#28 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
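The nearest-neighbor probe used in the chunk above ranks vocabulary words by cosine distance to a query word's embedding. A small NumPy sketch is below; the random matrix and tiny vocabulary are placeholders for the trained embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["good", "bad", "great", "decent", "terrible", "fine", "movie"]
emb = rng.normal(size=(len(vocab), 16))          # stand-in for trained word embeddings

def nearest(query, k=3):
    q = emb[vocab.index(query)]
    sims = emb @ q / (np.linalg.norm(emb, axis=1) * np.linalg.norm(q))
    order = np.argsort(1.0 - sims)               # ascending cosine distance
    return [(vocab[i], round(float(1.0 - sims[i]), 3)) for i in order if vocab[i] != query][:k]

print(nearest("good"))
```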
1605.07725 | 29 | When using adversarial and virtual adversarial training, "bad" no longer appears in the 10 top nearest neighbors to "good". "bad" falls to the 19th nearest neighbor for adversarial training and 21st nearest neighbor for virtual adversarial training, with cosine distances of 0.463 and 0.464, respectively. For the baseline and random perturbation method, the cosine distances were 0.361 and 0.377, respectively. In the other direction, the nearest neighbors to "bad" included "good" as the 4th nearest neighbor for the baseline method and random perturbation method. For both adversarial methods, "good" drops to the 36th nearest neighbor of "bad". | 1605.07725#29 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 30 | We also investigated the 15 nearest neighbors to "great" and their cosine distances with the trained embeddings. We saw that the cosine distances with adversarial and virtual adversarial training (0.159–0.331) were much smaller than those with the baseline and random perturbation method (0.244–0.399).
Table 2: Test performance on the IMDB sentiment classification task. * indicates using pretrained embeddings of CNN and bidirectional LSTM. | 1605.07725#30 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 31 | Table 2: Test performance on the IMDB sentiment classification task. * indicates using pretrained embeddings of CNN and bidirectional LSTM.
Method: Test error rate
Baseline (without embedding normalization): 7.33%
Baseline: 7.39%
Random perturbation with labeled examples: 7.20%
Random perturbation with labeled and unlabeled examples: 6.78%
Adversarial: 6.21%
Virtual Adversarial: 5.91%
Adversarial + Virtual Adversarial: 6.09%
Virtual Adversarial (on bidirectional LSTM): 5.91%
Adversarial + Virtual Adversarial (on bidirectional LSTM): 6.02%
Full+Unlabeled+BoW (Maas et al., 2011): 11.11%
Transductive SVM (Johnson & Zhang, 2015b): 9.99%
NBSVM-bigrams (Wang & Manning, 2012): 8.78%
Paragraph Vectors (Le & Mikolov, 2014): 7.42%
SA-LSTM (Dai & Le, 2015): 7.24%
One-hot bi-LSTM* (Johnson & Zhang, 2016b): 5.94% | 1605.07725#31 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 32 | Table 3: 10 top nearest neighbors to "good" and "bad" with the word embeddings trained on each method. We used cosine distance for the metric. "Baseline" means training with embedding dropout and "Random" means training with random perturbation with labeled examples. "Adversarial" and "Virtual Adversarial" mean adversarial training and virtual adversarial training.
âgoodâ âbadâ Baseline Random Adversarial Virtual Adversarial Baseline Random Adversarial Virtual Adversarial 1 2 3 4 5 6 7 8 9 10 great decent Ãbad excellent Good ï¬ne nice interesting solid entertaining great decent excellent nice Good Ãbad ï¬ne interesting entertaining solid decent great nice ï¬ne entertaining interesting Good excellent solid cool decent great nice ï¬ne entertaining interesting Good cool enjoyable excellent terrible awful horrible Ãgood Bad BAD poor stupid Horrible horrendous terrible awful horrible Ãgood poor BAD Bad stupid Horrible horrendous terrible awful horrible poor BAD stupid Bad laughable lame Horrible terrible awful horrible poor BAD stupid Bad laughable lame Horrible
The much weaker positive word "good" also moved from the 3rd nearest neighbor to the 15th after virtual adversarial training.
5.2 TEST PERFORMANCE ON ELEC, RCV1 AND ROTTEN TOMATOES DATASET | 1605.07725#32 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 33 | 5.2 TEST PERFORMANCE ON ELEC, RCV1 AND ROTTEN TOMATOES DATASET
Table 4 shows the test performance on the Elec and RCV1 datasets. We can see that our proposed method improved test performance over the baseline method and achieved state of the art performance on both datasets, even though the state of the art method uses a combination of CNN and bidirectional LSTM models. Our unidirectional LSTM model improves on the state of the art method, and our method with a bidirectional LSTM further improves results on RCV1. The bidirectional models likely perform better on the RCV1 dataset because it contains some very long sentences compared with the other datasets, and processing the sequence in reverse order as well gives the model shorter dependencies to capture.
Table 5 shows test performance on the Rotten Tomatoes dataset. Adversarial training was able to improve over the baseline method, and with both adversarial and virtual adversarial costs, achieved almost the same performance as the current state of the art method. However, the test performance of virtual adversarial training alone was worse than the baseline. We speculate that this is because the Rotten Tomatoes dataset has very few labeled sentences and the labeled sentences are very short.
| 1605.07725#33 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 34 |
Table 4: Test performance on the Elec and RCV1 classification tasks. * indicates using pretrained embeddings of CNN, and † indicates using pretrained embeddings of CNN and bidirectional LSTM.
Method: Elec / RCV1 test error rate
Baseline: 6.24% / 7.40%
Adversarial: 5.61% / 7.12%
Virtual Adversarial: 5.54% / 7.05%
Adversarial + Virtual Adversarial: 5.40% / 6.97%
Virtual Adversarial (on bidirectional LSTM): 5.55% / 6.71%
Adversarial + Virtual Adversarial (on bidirectional LSTM): 5.45% / 6.68%
Transductive SVM (Johnson & Zhang, 2015b): 16.41% / 10.77%
NBLM (Naïve Bayes logistic regression model) (Johnson & Zhang, 2015a): 8.11% / 13.97%
One-hot CNN* (Johnson & Zhang, 2015b): 6.27% / 7.71%
One-hot CNN† (Johnson & Zhang, 2016b): 5.87% / 7.15%
One-hot bi-LSTM† (Johnson & Zhang, 2016b): 5.55% / 8.52% | 1605.07725#34 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 35 | In this case, the virtual adversarial loss on unlabeled examples overwhelmed the supervised loss, so the model prioritized being robust to perturbation rather than obtaining the correct answer.
Table 5: Test performance on the Rotten Tomatoes sentiment classification task. * indicates using pretrained embeddings from word2vec Google News, and † indicates using unlabeled data from Amazon reviews.
Method: Test error rate
Baseline: 17.9%
Adversarial: 16.8%
Virtual Adversarial: 19.1%
Adversarial + Virtual Adversarial: 16.6%
NBSVM-bigrams (Wang & Manning, 2012): 20.6%
CNN* (Kim, 2014): 18.5%
AdaSent* (Zhao et al., 2015): 16.9%
SA-LSTM† (Dai & Le, 2015): 16.7%
5.3 PERFORMANCE ON THE DBPEDIA PURELY SUPERVISED CLASSIFICATION TASK | 1605.07725#35 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 36 | 5.3 PERFORMANCE ON THE DBPEDIA PURELY SUPERVISED CLASSIFICATION TASK
Table 6 shows the test performance of each method on DBpedia. The "Random perturbation" entry is the same method as the "Random perturbation with labeled examples" explained in Section 5.1. Note that DBpedia has only labeled examples, as we explained in Section 4, so this task is purely supervised learning. We can see that the baseline method has already achieved nearly the current state of the art performance, and our proposed method improves on the baseline method.
# 6 RELATED WORKS
Dropout (Srivastava et al., 2014) is a regularization method widely used for many domains including text. There are some previous works adding random noise to the input and hidden layers during training to prevent overfitting (e.g. (Sietsma & Dow, 1991; Poole et al., 2013)). However, in our experiments and in previous works (Miyato et al., 2016), training with adversarial and virtual adversarial perturbations outperformed training with random perturbations. | 1605.07725#36 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 37 | For semi-supervised learning with neural networks, a common approach, especially in the image domain, is to train a generative model whose latent features may be used as features for classification (e.g. (Hinton et al., 2006; Maaløe et al., 2016)). These models now achieve state of the art
Table 6: Test performance on the DBpedia topic classification task
Method: Test error rate
Baseline (without embedding normalization): 0.87%
Baseline: 0.90%
Random perturbation: 0.85%
Adversarial: 0.79%
Virtual Adversarial: 0.76%
Bag-of-words (Zhang et al., 2015): 3.57%
Large-CNN (character-level) (Zhang et al., 2015): 1.73%
SA-LSTM (word-level) (Dai & Le, 2015): 1.41%
N-grams TFIDF (Zhang et al., 2015): 1.31%
SA-LSTM (character-level) (Dai & Le, 2015): 1.19%
Word CNN (Johnson & Zhang, 2016a): 0.84% | 1605.07725#37 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 38 | performance on the image domain. However, these methods require numerous additional hyperparameters with generative models, and the conditions under which the generative model will provide good supervised learning performance are poorly understood. By comparison, adversarial and virtual adversarial training requires only one hyperparameter, and has a straightforward interpretation as robust optimization.
Adversarial and virtual adversarial training resemble some semi-supervised or transductive SVM approaches (Joachims, 1999; Chapelle & Zien, 2005; Collobert et al., 2006; Belkin et al., 2006) in that both families of methods push the decision boundary far from training examples (or in the case of transductive SVMs, test examples). However, adversarial training methods insist on margins on the input space, while SVMs insist on margins on the feature space defined by the kernel function. This property allows adversarial training methods to learn models with a more flexible function on the space where the margins are imposed. In our experiments (Table 2, 4) and Miyato et al. (2016), adversarial and virtual adversarial training achieve better performance than SVM based methods. | 1605.07725#38 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 39 | There have also been semi-supervised approaches applied to text classification with both CNNs and RNNs. These approaches utilize "view-embeddings" (Johnson & Zhang, 2015b; 2016b), which use the window around a word to generate its embedding. When these are used as a pretrained model for the classification model, they are found to improve generalization performance. These methods and our method are complementary, as we showed that our method improves on top of a recurrent pretrained language model.
# 7 CONCLUSION
In our experiments, we found that adversarial and virtual adversarial training have good regularization performance in sequence models on text classification tasks. On all datasets, our proposed method exceeded or was on par with the state of the art performance. We also found that adversarial and virtual adversarial training improved not only classification performance but also the quality of word embeddings. These results suggest that our proposed method is promising for other text domain tasks, such as machine translation (Sutskever et al., 2014), learning distributed representations of words or paragraphs (Mikolov et al., 2013; Le & Mikolov, 2014) and question answering tasks. Our approach could also be used for other general sequential tasks, such as for video or speech.
ACKNOWLEDGMENTS | 1605.07725#39 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 40 | ACKNOWLEDGMENTS
We thank the developers of TensorFlow. We thank the members of the Google Brain team for their warm support and valuable comments. This work is partly supported by NEDO.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 7(Nov):2399–2434, 2006.
Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. Neural probabilistic language models. In Innovations in Machine Learning, pp. 137–186. Springer, 2006.
Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In AISTATS, 2005. | 1605.07725#40 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 41 | Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In AISTATS, 2005.
Ronan Collobert, Fabian Sinz, Jason Weston, and Léon Bottou. Large scale transductive svms. Journal of Machine Learning Research, 7(Aug):1687–1712, 2006.
Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In NIPS, 2015.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, 2011.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602–610, 2005.
Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006. | 1605.07725#41 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 42 | Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
Thorsten Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.
Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. NAACL HLT, 2015a.
Rie Johnson and Tong Zhang. Semi-supervised convolutional neural networks for text categorization via region embedding. In NIPS, 2015b.
Rie Johnson and Tong Zhang. Convolutional neural networks for text categorization: Shallow word-level vs. deep character-level. arXiv preprint arXiv:1609.00718, 2016a.
Rie Johnson and Tong Zhang. Supervised and semi-supervised text categorization using LSTM for region embeddings. In ICML, 2016b.
Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, 2014. | 1605.07725#42 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 43 | Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, 2014.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, et al. DBpedia – a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167–195, 2015.
David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. Rcv1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397, 2004.
Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. In ICML, 2016.
Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In ACL: Human Language Technologies-Volume 1, 2011.
Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In ACM conference on Recommender systems, 2013. | 1605.07725#43 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 44 | Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In ACM conference on Recommender systems, 2013.
Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. In ICLR, 2016.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, 2005.
Ben Poole, Jascha Sohl-Dickstein, and Surya Ganguli. Analyzing noise in autoencoders and deep networks. In Deep Learning Workshop on NIPS, 2013. | 1605.07725#44 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07725 | 45 | J. Sietsma and R. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1), 1991.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 2014.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.
Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL: Short Papers, 2012.
David Warde-Farley and Ian Goodfellow. Adversarial perturbations of deep neural networks. In Tamir Hazan, George Papandreou, and Daniel Tarlow (eds.), Perturbations, Optimization, and Statistics, chapter 11. 2016. Book in preparation for MIT Press. | 1605.07725#45 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 | [
{
"id": "1603.04467"
},
{
"id": "1605.07725"
},
{
"id": "1609.00718"
}
] |
1605.07678 | 0 | arXiv:1605.07678v4 [cs.CV] 14 Apr 2017
AN ANALYSIS OF DEEP NEURAL NETWORK MODELS FOR PRACTICAL APPLICATIONS
Alfredo Canziani & Eugenio Culurciello Weldon School of Biomedical Engineering Purdue University {canziani,euge}@purdue.edu
# Adam Paszke Faculty of Mathematics, Informatics and Mechanics University of Warsaw [email protected]
# ABSTRACT | 1605.07678#0 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07427 | 1 | 1 Université de Montréal, Canada. 2 Twitter Cortex, USA. 3 IBM Watson Research Center, USA. 4 CIFAR, Canada.
# Abstract
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the-art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
# 1 Introduction | 1605.07427#1 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 1 | # Alfredo Canziani & Eugenio Culurciello Adam Paszke
# ABSTRACT
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
# 1 INTRODUCTION | 1605.07678#1 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07683 | 1 | Antoine Bordes, Y-Lan Boureau & Jason Weston Facebook AI Research New York, USA {abordes, ylan, jase}@fb.com
# ABSTRACT
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols in order to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
# INTRODUCTION | 1605.07683#1 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 2 | # 1 Introduction
Until recently, traditional machine learning approaches for challenging tasks such as image captioning, object detection, or machine translation have consisted in complex pipelines of algorithms, each being separately tuned for better performance. With the recent success of neural networks and deep learning research, it has now become possible to train a single model end-to-end, using backpropagation. Such end-to-end systems often outperform traditional approaches, since the entire model is directly optimized with respect to the ï¬nal task at hand. However, simple encode-decode style neural networks often underperform on knowledge-based reasoning tasks like question-answering or dialog systems. Indeed, in such cases it is nearly impossible for regular neural networks to store all the necessary knowledge in their parameters. | 1605.07427#2 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 2 | # 1 INTRODUCTION
Since the breakthrough in the 2012 ImageNet competition (Russakovsky et al., 2015) achieved by AlexNet (Krizhevsky et al., 2012) – the first entry that used a Deep Neural Network (DNN) – several other DNNs with increasing complexity have been submitted to the challenge in order to achieve better performance.
In the ImageNet classification challenge, the ultimate goal is to obtain the highest accuracy in a multi-class classification problem framework, regardless of the actual inference time. We believe that this has given rise to several problems. Firstly, it is now normal practice to run several trained instances of a given model over multiple similar instances of each validation image. This practice, also known as model averaging or ensemble of DNNs, dramatically increases the amount of computation required at inference time to achieve the published accuracy. Secondly, model selection is hindered by the fact that different submissions are evaluating their (ensemble of) models a different number of times on the validation images, and therefore the reported accuracy is biased on the specific sampling technique (and ensemble size). Thirdly, there is currently no incentive in speeding up inference time, which is a key element in practical applications of these models, and affects resource utilisation, power-consumption, and latency. | 1605.07678#2 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07683 | 2 | # INTRODUCTION
The most useful applications of dialog systems such as digital personal assistants or bots are currently goal-oriented and transactional: the system needs to understand a user request and complete a related task with a clear goal within a limited number of dialog turns. The workhorse of traditional dialog systems is slot-filling (Lemon et al., 2006; Wang and Lemon, 2013; Young et al., 2013) which predefines the structure of a dialog state as a set of slots to be filled during the dialog. For a restaurant reservation system, such slots can be the location, price range or type of cuisine of a restaurant. Slot-filling has proven reliable but is inherently hard to scale to new domains: it is impossible to manually encode all features and slots that users might refer to in a conversation. | 1605.07683#2 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
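The slot-filling scheme described in the chunk above (1605.07683#2) can be made concrete with a small sketch. The slot names, the request action and the api_call format below are illustrative assumptions for a restaurant domain, not the paper's actual implementation.

```python
# Minimal slot-filling sketch for a restaurant domain (illustrative only).
from typing import Dict, Optional

SLOTS = ("cuisine", "location", "price", "party_size")

def update_state(state: Dict[str, Optional[str]], parsed_turn: Dict[str, str]) -> Dict[str, Optional[str]]:
    """Merge slot values extracted from the latest user turn into the dialog state."""
    new_state = dict(state)
    for slot, value in parsed_turn.items():
        if slot in SLOTS:
            new_state[slot] = value
    return new_state

def next_action(state: Dict[str, Optional[str]]) -> str:
    """Ask for the first missing slot, or issue an API call once all slots are filled."""
    for slot in SLOTS:
        if state.get(slot) is None:
            return f"request({slot})"
    return "api_call {cuisine} {location} {price} {party_size}".format(**state)

state = {slot: None for slot in SLOTS}
state = update_state(state, {"cuisine": "british", "price": "expensive", "party_size": "six"})
print(next_action(state))   # request(location)
state = update_state(state, {"location": "london"})
print(next_action(state))   # api_call british london expensive six
```

The hard-to-scale part the chunk points at is exactly the fixed SLOTS tuple: every new domain needs a new hand-written schema like this one.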
1605.07427 | 3 | Neural networks with memory [1, 2] can deal with knowledge bases by having an external memory component which can be used to explicitly store knowledge. The memory is accessed by reader and writer functions, which are both made differentiable so that the entire architecture (neural network, reader, writer and memory components) can be trained end-to-end using backpropagation. Memory-based architectures can also be considered as generalizations of RNNs and LSTMs, where the memory is analogous to recurrent hidden states. However they are much richer in structure and can handle very long-term dependencies because once a vector (i.e., a memory) is stored, it is copied
*Corresponding author: [email protected]
from time step to time step and can thus stay there for a very long time (and gradients correspondingly flow back through time unhampered).
There exist several variants of neural networks with a memory component: Memory Networks [2], Neural Turing Machines (NTM) [1], Dynamic Memory Networks (DMN) [3]. They all share five major components: memory, input module, reader, writer, and output module. | 1605.07427#3 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 3 | This article aims to compare state-of-the-art DNN architectures, submitted for the ImageNet challenge over the last 4 years, in terms of computational requirements and accuracy. We compare these architectures on multiple metrics related to resource utilisation in actual deployments: accuracy, memory footprint, parameters, operations count, inference time and power consumption. The purpose of this paper is to stress the importance of these figures, which are essential hard constraints for the optimisation of these networks in practical deployments and applications.
# 2 METHODS
In order to compare the quality of different models, we collected and analysed the accuracy values reported in the literature. We immediately found that different sampling techniques do not allow for a direct comparison of resource utilisation. For example, central-crop (top-5 validation) errors of a
[Figure 1 plot residue: y-axis Top-1 accuracy [%], x-axis ticks list the network architectures; garbled tick labels removed.] | 1605.07678#3 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
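Two of the metrics listed in the chunk above (parameter count and inference time) can be estimated with a short sketch. This is a PyTorch/torchvision approximation run on whatever hardware is available, not the authors' Torch7/Jetson TX1 measurement setup, and the choice of resnet18 is arbitrary.

```python
# Rough parameter-count and inference-time measurement (PyTorch sketch, CPU by default).
import time
import torch
import torchvision.models as models

model = models.resnet18().eval()               # any torchvision classification model works here
params = sum(p.numel() for p in model.parameters())
print(f"parameters: {params / 1e6:.1f} M")

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    for _ in range(5):                         # warm-up iterations
        model(x)
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed = (time.perf_counter() - start) / runs
print(f"mean forward time: {elapsed * 1e3:.1f} ms per image")
```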
1605.07683 | 3 | End-to-end dialog systems, usually based on neural networks (Shang et al., 2015; Vinyals and Le, 2015; Sordoni et al., 2015; Serban et al., 2015a; Dodge et al., 2016), escape such limitations: all their components are directly trained on past dialogs, with no assumption on the domain or dialog state structure, thus making it easy to automatically scale up to new domains. They have shown promising performance in non goal-oriented chit-chat settings, where they were trained to predict the next utterance in social media and forum threads (Ritter et al., 2011; Wang et al., 2013; Lowe et al., 2015) or movie conversations (Banchs, 2012). But the performance achieved on chit-chat may not necessarily carry over to goal-oriented conversations. As illustrated in Figure 1 in a restaurant reservation scenario, conducting goal-oriented dialog requires skills that go beyond language modeling, e.g., asking questions to clearly deï¬ne a user request, querying Knowledge Bases (KBs), interpreting results from queries to display options to users or completing a transaction. This makes it hard to ascertain how well end-to-end dialog models would do, | 1605.07683#3 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 4 | Memory: The memory is an array of cells, each capable of storing a vector. The memory is often initialized with external data (e.g. a database of facts), by filling in its cells with pre-trained vector representations of that data.
Input module: The input module is to compute a representation of the input that can be used by other modules.
Writer: The writer takes the input representation and updates the memory based on it. The writer can be as simple as filling the slots in the memory with input vectors in a sequential way (as often done in memory networks). If the memory is bounded, instead of sequential writing, the writer has to decide where to write and when to rewrite cells (as often done in NTMs).
Reader: Given an input and the current state of the memory, the reader retrieves content from the memory, which will then be used by an output module. This often requires comparing the input's representation or a function of the recurrent state with memory cells using some scoring function such as a dot product.
Output module: Given the content retrieved by the reader, the output module generates a prediction, which often takes the form of a conditional distribution over multiple labels for the output. | 1605.07427#4 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
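The reader described in the chunk above scores each memory cell against the input (e.g. with a dot product) and combines the cells. Below is a minimal NumPy sketch of such a soft-attention read; the shapes and the softmax weighting are generic illustrations, not the exact model used in the paper.

```python
# Soft-attention read over a flat memory (NumPy sketch).
import numpy as np

def softmax(z):
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_read(memory, query):
    """memory: (N, d) array of cell vectors; query: (d,) input representation."""
    scores = memory @ query          # dot-product scoring of every cell, O(N * d)
    weights = softmax(scores)        # differentiable addressing over all N cells
    return weights @ memory          # convex combination of the cells, shape (d,)

rng = np.random.default_rng(0)
memory = rng.normal(size=(10_000, 64))   # a memory with 10k cells
query = rng.normal(size=64)
retrieved = soft_read(memory, query)
print(retrieved.shape)                   # (64,)
```

The cost of the `scores` line is what makes this addressing scheme hard to scale to very large memories, which is the motivation for the hierarchical reader introduced later.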
1605.07678 | 4 | [Figure 2 plot residue: data points for AlexNet, BN-AlexNet, GoogLeNet, ENet, VGG-16/19, ResNet-18/34/50/101/152 and Inception-v3/v4; y-axis Top-1 accuracy [%], x-axis Operations [G-Ops], blob size 5M–155M parameters.]
Figure 1: Top-1 vs. network. Single-crop top-1 validation accuracies for top scoring single-model architectures. We introduce with this chart our choice of colour scheme, which will be used throughout this publication to distinguish effectively different architectures and their correspondent authors. Notice that networks of the same group share the same hue, for example ResNet are all variations of pink.
Figure 2: Top-1 vs. operations, size ∝ parameters. Top-1 one-crop accuracy versus amount of operations required for a single forward pass. The size of the blobs is proportional to the number of network parameters; a legend is reported in the bottom right corner, spanning from 5×10^6 to 155×10^6 params. Both these figures share the same y-axis, and the grey dots highlight the centre of the blobs. | 1605.07678#4 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
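The Figure 2 caption above describes a scatter of top-1 accuracy against operation count with blob area proportional to parameter count. A small matplotlib sketch of that kind of chart is given below; the four networks and their numbers are rough placeholders, not the paper's measured values.

```python
# Accuracy vs. operations scatter with blob size proportional to parameters (matplotlib sketch).
import matplotlib.pyplot as plt

# Placeholder values (G-Ops, top-1 %, millions of parameters) -- not the paper's measurements.
nets = {
    "AlexNet":   (0.7, 55.0,  61),
    "GoogLeNet": (1.6, 69.0,   7),
    "VGG-16":    (15.5, 71.0, 138),
    "ResNet-50": (3.9, 75.0,  26),
}

fig, ax = plt.subplots()
for name, (gops, top1, params_m) in nets.items():
    ax.scatter(gops, top1, s=params_m * 5, alpha=0.5)   # marker area encodes parameter count
    ax.annotate(name, (gops, top1))
ax.set_xlabel("Operations [G-Ops]")
ax.set_ylabel("Top-1 accuracy [%]")
plt.savefig("top1_vs_ops.png")
```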
1605.07683 | 4 | interpreting results from queries to display options to users or completing a transaction. This makes it hard to ascertain how well end-to-end dialog models would do, especially since evaluating chit-chat performance in itself is not straightforward (Liu et al., 2016). In particular, it is unclear if end-to-end models are in a position to replace traditional dialog methods in a goal-directed setting: can end-to-end dialog models be competitive with traditional methods even in the well-deï¬ned narrow-domain tasks where they excel? If not, where do they fall short? | 1605.07683#4 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 5 | Output module: Given the content retrieved by the reader, the output module generates a prediction, which often takes the form of a conditional distribution over multiple labels for the output.
For the rest of the paper, we will use the name memory network to describe any model which has any form of these five components. We would like to highlight that all the components except the memory are learnable. Depending on the application, any of these components can also be fixed. In this paper, we will focus on the situation where a network does not write and only reads from the memory. | 1605.07427#5 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 5 | single run of VGG-161 (Simonyan & Zisserman, 2014) and GoogLeNet (Szegedy et al., 2014) are 8.70% and 10.07% respectively, revealing that VGG-16 performs better than GoogLeNet. When models are run with 10-crop sampling,2 then the errors become 9.33% and 9.15% respectively, and therefore VGG-16 will perform worse than GoogLeNet, using a single central-crop. For this reason, we decided to base our analysis on re-evaluations of top-1 accuracies3 for all networks with a single central-crop sampling technique (Zagoruyko, 2016). | 1605.07678#5 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
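The chunk above contrasts single central-crop and 10-crop evaluation, which change the reported error of the same model. The sketch below shows how the two sampling schemes differ in code; it uses torchvision transforms and an untrained resnet18 as stand-ins and is not the evaluation pipeline behind the reported numbers.

```python
# Central-crop vs. 10-crop evaluation of a classifier (PyTorch sketch).
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18().eval()
img = Image.new("RGB", (640, 480))            # stand-in for a validation image

center = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
ten = transforms.Compose([transforms.Resize(256), transforms.TenCrop(224)])

with torch.no_grad():
    # Single central crop: one forward pass per image.
    logits_center = model(center(img).unsqueeze(0))

    # 10-crop: four corners + centre and their mirrored twins, predictions averaged.
    crops = torch.stack([transforms.ToTensor()(c) for c in ten(img)])   # (10, 3, 224, 224)
    logits_ten = model(crops).mean(dim=0, keepdim=True)

print(logits_center.argmax(1).item(), logits_ten.argmax(1).item())
```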
1605.07683 | 5 | This paper aims to make it easier to address these questions by proposing an open resource to test end- to-end dialog systems in a way that 1) favors reproducibility and comparisons, and 2) is lightweight and easy to use. We aim to break down a goal-directed objective into several subtasks to test some crucial capabilities that dialog systems should have (and hence provide error analysis by design).
| 1605.07683#5 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 6 | In this paper, we focus on the application of memory networks to large-scale tasks. Specifically, we focus on large scale factoid question answering. For this problem, given a large set of facts and a natural language question, the goal of the system is to answer the question by retrieving the supporting fact for that question, from which the answer can be derived. Application of memory networks to this task has been studied in [4]. However, [4] depended on keyword based heuristics to filter the facts to a smaller set which is manageable for training. However, heuristics are invariably dataset dependent and we are interested in a more general solution which can be used when the facts are of any structure. One can design soft attention retrieval mechanisms, where a convex combination of all the cells is retrieved, or design hard attention retrieval mechanisms where one or few cells from the memory are retrieved. Soft attention is achieved by using softmax over the memory, which makes the reader differentiable and hence learning can be done using gradient descent. Hard attention is achieved by using methods like REINFORCE [5], which provides a noisy gradient estimate when discrete stochastic decisions are made by a model. | 1605.07427#6 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
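The chunk above contrasts soft attention (a softmax over every memory cell, linear in memory size) with hard attention trained by REINFORCE. The NumPy sketch below shows the two access patterns side by side; the score-function term is only meant to illustrate where the noisy REINFORCE estimate comes from, not to reproduce the models compared in the paper.

```python
# Soft vs. hard memory access (NumPy sketch).
import numpy as np

rng = np.random.default_rng(0)
memory = rng.normal(size=(5000, 32))     # N cells of dimension d
query = rng.normal(size=32)

scores = memory @ query                  # O(N) scoring: the bottleneck for huge memories
probs = np.exp(scores - scores.max())
probs /= probs.sum()

# Soft attention: differentiable convex combination of all N cells.
soft_read = probs @ memory

# Hard attention: sample a single cell; the choice is discrete, so gradients
# are estimated with REINFORCE, weighting log prob(chosen cell) by a reward.
idx = rng.choice(len(memory), p=probs)
hard_read = memory[idx]
reward = 1.0                             # e.g. 1 if the retrieved fact supports the answer
reinforce_term = reward * np.log(probs[idx])   # score-function surrogate for the sampled choice
print(soft_read.shape, hard_read.shape, float(reinforce_term))
```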
1605.07678 | 6 | For inference time and memory usage measurements we have used Torch7 with cuDNN-v5 and CUDA-v8 back-end. All experiments were conducted on a JetPack-2.3 NVIDIA Jetson TX1 board (nVIDIA): an embedded visual computing system with a 64-bit ARM A57 CPU, a 1 T-Flop/s 256-core NVIDIA Maxwell GPU and 4 GB LPDDR4 of shared RAM. We use this resource-limited device to better underline the differences between network architectures, but similar results can be obtained on most recent GPUs, such as the NVIDIA K40 or Titan X, to name a few. Operation counts were obtained using an open-source tool that we developed (Paszke, 2016). For measuring the power consumption, a Keysight 1146B Hall effect current probe has been used with a Keysight MSO-X 2024A 200 MHz digital oscilloscope with a sampling period of 2 s and a 50 kSa/s sample rate. The system was powered by a Keysight E3645A GPIB controlled DC power supply.
# 3 RESULTS | 1605.07678#6 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
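The measurement setup in the chunk above records forward time and power draw; energy per image then follows as mean power × time / batch size. The sketch below times a model across batch sizes and applies that arithmetic with an assumed power reading, since no current probe is involved here and the model choice is arbitrary.

```python
# Forward-pass timing across batch sizes, plus energy-per-image arithmetic (sketch).
import time
import torch
from torchvision import models

model = models.resnet18().eval()
assumed_power_w = 10.0                    # stand-in for a measured mean power draw [W]

with torch.no_grad():
    for batch in (1, 2, 4, 8, 16):
        x = torch.randn(batch, 3, 224, 224)
        model(x)                          # warm-up pass for this batch size
        start = time.perf_counter()
        model(x)
        elapsed = time.perf_counter() - start
        energy_per_image_j = assumed_power_w * elapsed / batch
        print(f"batch {batch:2d}: {elapsed * 1e3 / batch:6.1f} ms/image, "
              f"~{energy_per_image_j:.3f} J/image at {assumed_power_w} W")
```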
1605.07683 | 6 | [Figure 1 residue: an example restaurant-reservation dialog illustrating Task 1 (Issuing API calls: the bot collects cuisine, location, price range and party size, then issues an API call), Task 2 (Updating API calls: the user changes the booking from six to four people) and Task 3 (Displaying options: returned KB facts such as The_Place R_cuisine british, The_Place R_location london, The_Fancy_Pub R_rating 8 are used to propose restaurants until one is accepted, after which the address is provided); the remaining OCR of the figure is not recoverable.] | 1605.07683#6 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
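The KB facts shown in the figure residue above are triples such as (The_Place, R_cuisine, british). A small sketch of storing such triples and answering an api_call by filtering on the requested constraints is given below; the fact list and call signature are illustrative, not the released dataset or its exact call format.

```python
# Filtering knowledge-base triples to answer an api_call (illustrative sketch).
facts = [
    ("The_Place", "R_cuisine", "british"),
    ("The_Place", "R_location", "london"),
    ("The_Place", "R_price", "expensive"),
    ("The_Place", "R_rating", "7"),
    ("The_Fancy_Pub", "R_cuisine", "british"),
    ("The_Fancy_Pub", "R_location", "london"),
    ("The_Fancy_Pub", "R_price", "expensive"),
    ("The_Fancy_Pub", "R_rating", "8"),
]

def api_call(cuisine, location, price):
    """Return restaurants whose triples match every requested constraint, best-rated first."""
    wanted = {"R_cuisine": cuisine, "R_location": location, "R_price": price}
    restaurants = {subj for subj, _, _ in facts}
    matches = []
    for r in restaurants:
        props = {rel: val for subj, rel, val in facts if subj == r}
        if all(props.get(rel) == val for rel, val in wanted.items()):
            matches.append((r, int(props.get("R_rating", 0))))
    return [r for r, _ in sorted(matches, key=lambda x: -x[1])]

print(api_call("british", "london", "expensive"))   # ['The_Fancy_Pub', 'The_Place']
```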
1605.07427 | 7 | Both soft attention and hard attention have limitations. As the size of the memory grows, soft attention using softmax weighting is not scalable. It is computationally very expensive, since its complexity is linear in the size of the memory. Also, at initialization, gradients are dispersed so much that it can reduce the effectiveness of gradient descent. These problems can be alleviated by a hard attention mechanism, for which the training method of choice is REINFORCE. However, REINFORCE can be brittle due to its high variance and existing variance reduction techniques are complex. Thus, it is rarely used in memory networks (even in cases of a small memory).
In this paper, we propose a new memory selection mechanism based on Maximum Inner Product Search (MIPS) which is both scalable and easy to train. This can be considered as a hybrid of soft and hard attention mechanisms. The key idea is to structure the memory in a hierarchical way such that it is easy to perform MIPS, hence the name Hierarchical Memory Network (HMN). HMNs are scalable at both training and inference time. The main contributions of the paper are as follows:
• We explore hierarchical memory networks, where the memory is organized in a hierarchical fashion, which allows the reader to efficiently access only a subset of the memory. | 1605.07427#7 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
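The contributions listed above hinge on reading memory via Maximum Inner Product Search. The sketch below shows exact K-MIPS as a top-k over inner products and a simple clustering-based approximation that only scores cells in the query's best-matching clusters; it is a toy stand-in for the approximate MIPS techniques the paper actually evaluates, with arbitrary sizes and cluster counts.

```python
# Exact vs. clustering-based approximate K-MIPS over a memory matrix (NumPy sketch).
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 20_000, 64, 5
memory = rng.normal(size=(N, d))
query = rng.normal(size=d)

# Exact K-MIPS: score every cell, keep the K largest inner products (O(N * d)).
scores = memory @ query
exact_topk = np.argsort(-scores)[:K]

# Approximate K-MIPS: pre-assign cells to a few representative centroids offline,
# then at query time search only the clusters whose centroids score highest.
n_clusters, n_probe = 100, 5
centroids = memory[rng.choice(N, n_clusters, replace=False)]
assignment = np.argmax(memory @ centroids.T, axis=1)          # offline clustering step
probe = np.argsort(-(centroids @ query))[:n_probe]            # clusters to visit at query time
candidates = np.flatnonzero(np.isin(assignment, probe))
approx_topk = candidates[np.argsort(-(memory[candidates] @ query))[:K]]

print("exact:", exact_topk)
print("approx:", approx_topk, "overlap:", len(set(exact_topk) & set(approx_topk)))
```

Only the candidate cells are scored at query time, which is the speed/accuracy trade-off the last two bullets describe.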
1605.07678 | 7 | # 3 RESULTS
In this section we report our results and comparisons. We analysed the following DNNs: AlexNet (Krizhevsky et al., 2012), batch normalised AlexNet (Zagoruyko, 2016), batch normalised Network In Network (NIN) (Lin et al., 2013), ENet (Paszke et al., 2016) for ImageNet (Culurciello, 2016), GoogLeNet (Szegedy et al., 2014), VGG-16 and -19 (Simonyan & Zisserman, 2014), ResNet-18, -34, -50, -101 and -152 (He et al., 2015), Inception-v3 (Szegedy et al., 2015) and Inception-v4 (Szegedy et al., 2016), since they obtained the highest performance, in these four years, on the ImageNet (Russakovsky et al., 2015) challenge.
1 In the original paper this network is called VGG-D, which is the best performing network. Here we prefer to highlight the number of layers utilised, so we will call it VGG-16 in this publication.
2 From a given image multiple patches are extracted: four corners plus central crop and their horizontal
mirrored twins.
3 Accuracy and error rate always sum to 100, therefore in this paper they are used interchangeably.
2 | 1605.07678#7 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07427 | 8 | • We explore hierarchical memory networks, where the memory is organized in a hierarchical fashion, which allows the reader to efficiently access only a subset of the memory.
• While there are several ways to decide which subset to access, we propose to pose memory access as a maximum inner product search (MIPS) problem.
• We empirically show that exact MIPS-based algorithms not only enjoy similar convergence as soft attention models, but can even improve the performance of the memory network.
• Since exact MIPS is as computationally expensive as a full soft attention model, we propose to train the memory networks using approximate MIPS techniques for scalable memory access.
• We empirically show that unlike exact MIPS, approximate MIPS algorithms provide a speedup and scalability of training, though at the cost of some performance.
# 2 Hierarchical Memory Networks
In this section, we describe the proposed Hierarchical Memory Network (HMN). In this paper, HMNs only differ from regular memory networks in two of their components: the memory and the reader. | 1605.07427#8 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 8 | mirrored twins.
3 Accuracy and error rate always sum to 100, therefore in this paper they are used interchangeably.
[Figures 3 and 4 (plot data): forward time per image [ms] and net power consumption [W] versus batch size, with one curve per evaluated network.]
Figure 3: Inference time vs. batch size. This chart shows inference time across different batch sizes with a logarithmic ordinate and logarithmic abscissa. Missing data points are due to lack of enough system memory required to process larger batches. A speed-up of 3× is achieved by AlexNet due to better optimisation of its fully connected layers for larger batches. | 1605.07678#8 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07683 | 8 | Figure 1: Goal-oriented dialog tasks. A user (in green) chats with a bot (in blue) to book a table at a restaurant. Models must predict bot utterances and API calls (in dark red). Task 1 tests the capacity of interpreting a request and asking the right questions to issue an API call. Task 2 checks the ability to modify an API call. Tasks 3 and 4 test the capacity of using outputs from an API call (in light red) to propose options (sorted by rating) and to provide extra information. Task 5 combines everything. | 1605.07683#8 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 9 | Memory: Instead of a flat array of cells for the memory structure, HMNs leverage a hierarchical memory structure. Memory cells are organized into groups and the groups can further be organized into higher level groups. The choice for the memory structure is tightly coupled with the choice of reader, which is essential for fast memory access. We consider three classes of approaches for the memory's structure: hashing-based approaches, tree-based approaches, and clustering-based approaches. This is explained in detail in the next section.
Reader: The reader in the HMN is different from the readers in flat memory networks. Flat memory-based readers use either soft attention over the entire memory or hard attention that retrieves a single cell. While these mechanisms might work with small memories, with HMNs we are more interested in achieving scalability towards very large memories. So instead, HMN readers use soft attention only over a selected subset of the memory. Selecting memory subsets is guided by a maximum inner product search algorithm, which can exploit the hierarchical structure of the organized memory to retrieve the most relevant facts in sub-linear time. The MIPS-based reader is explained in more detail in the next section. | 1605.07427#9 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 9 | Figure 4: Power vs. batch size. Net power consumption (due only to the forward processing of several DNNs) for different batch sizes. The idle power of the TX1 board, with no HDMI screen connected, was 1.30 W on average. The max frequency component of power supply current was 1.4 kHz, corresponding to a Nyquist sampling frequency of 2.8 kHz.
# 3.1 ACCURACY
Figure 1 shows one-crop accuracies of the most relevant entries submitted to the ImageNet challenge, from the AlexNet (Krizhevsky et al., 2012), on the far left, to the best performing Inception-v4 (Szegedy et al., 2016). The newest ResNet and Inception architectures surpass all other architectures by a significant margin of at least 7%. | 1605.07678#9 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07683 | 9 | In the spirit of the bAbI tasks conceived as question answering testbeds (Weston et al., 2015b), we designed a set of five tasks within the goal-oriented context of restaurant reservation. Grounded with an underlying KB of restaurants and their properties (location, type of cuisine, etc.), these tasks cover several dialog stages and test if models can learn various abilities such as performing dialog management, querying KBs, interpreting the output of such queries to continue the conversation or dealing with new entities not appearing in dialogs from the training set. In addition to showing how the set of tasks we propose can be used to test the goal-directed capabilities of an end-to-end dialog system, we also propose results on two additional datasets extracted from real interactions with users, to confirm that the pattern of results observed in our tasks is indeed a good proxy for what would be observed on real data, with the added benefit of better reproducibility and interpretability. | 1605.07683#9 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 10 | In HMNs, the reader is thus trained to create MIPS queries such that it can retrieve a sufficient set of facts. While most of the standard applications of MIPS [6-8] so far have focused on settings where both query vector and database (memory) vectors are precomputed and fixed, memory readers in HMNs are learning to do MIPS by updating the input representation such that the result of MIPS retrieval contains the correct fact(s).
# 3 Memory Reader with K-MIPS attention
In this section, we describe how the HMN memory reader uses Maximum Inner Product Search (MIPS) during learning and inference.
We begin with a formal definition of K-MIPS. Given a set of points X = {x1, . . . , xn} and a query vector q, our goal is to find
argmax(K)_{i in X} q^T x_i   (1)
where the argmax(K) returns the indices of the top-K maximum values. In the case of HMNs, X corresponds to the memory and q corresponds to the vector computed by the input module. | 1605.07427#10 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 10 | Figure 2 provides a different, but more informative view of the accuracy values, because it also visualises computational cost and number of network parameters. The first thing that is very apparent is that VGG, even though it is widely used in many applications, is by far the most expensive architecture, both in terms of computational requirements and number of parameters. Its 16- and 19-layer implementations are in fact isolated from all other networks. The other architectures form a steep straight line that seems to start to flatten with the latest incarnations of Inception and ResNet. This might suggest that models are reaching an inflection point on this data set. At this inflection point, the costs, in terms of complexity, start to outweigh gains in accuracy. We will later show that this trend is hyperbolic.
3.2 INFERENCE TIME
Figure 3 reports inference time per image on each architecture, as a function of image batch size (from 1 to 64). We notice that VGG processes one image in a fifth of a second, making it a less likely contender in real-time applications on an NVIDIA TX1. AlexNet shows a speed-up of roughly 3× going from batch of 1 to 64 images, due to weak optimisation of its fully connected layers. It is a very surprising finding that will be further discussed in the next subsection. | 1605.07678#10 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07683 | 10 | The goal here is explicitly not to improve the state of the art in the narrow domain of restaurant booking, but to take a narrow domain where traditional handcrafted dialog systems are known to perform well, and use that to gauge the strengths and weaknesses of current end-to-end systems with no domain knowledge. Solving our tasks requires manipulating both natural language and symbols from a KB. Evaluation uses two metrics, per-response and per-dialog accuracies, the latter tracking completion of the actual goal. Figure 1 depicts the tasks and Section 3 details them. Section 4 compares multiple methods on these tasks. As an end-to-end neural model, we tested Memory Networks (Weston et al., 2015a), an attention-based architecture that has proven competitive for non goal-oriented dialog (Dodge et al., 2016). Our experiments in Section 5 show that Memory Networks can be trained to perform non-trivial operations such as issuing API calls to KBs and manipulating entities unseen in training. We confirm our findings on real human-machine dialogs
Published as a conference paper at ICLR 2017 | 1605.07683#10 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 11 | A simple but inefficient solution for K-MIPS involves a linear search over the cells in memory by performing the dot product of q with all the memory cells. While this will return the exact result for K-MIPS, it is too costly to perform when we deal with a large-scale memory. However, in many practical applications, it is often sufficient to have an approximate result for K-MIPS, trading speed-up at the cost of accuracy. There exist several approximate K-MIPS solutions in the literature [8, 9, 7, 10].
All the approximate K-MIPS solutions add a form of hierarchical structure to the memory and visit only a subset of the memory cells to find the maximum inner product for a given query. Hashing-based approaches [8-10] hash cells into multiple bins, and given a query they search for K-MIPS cell vectors only in bins that are close to the bin associated with the query. Tree-based approaches [6, 7] create search trees with cells in the leaves of the tree. Given a query, a path in the tree is followed and MIPS is performed only for the leaf for the chosen path. Clustering-based approaches [11] cluster
3 | 1605.07427#11 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 11 | # 3.3 POWER
Power measurements are complicated by the high frequency swings in current consumption, which required high sampling current read-out to avoid aliasing. In this work, we used a 200 MHz digital oscilloscope with a current probe, as reported in section 2. Other measuring instruments, such as an AC power strip with 2 Hz sampling rate, or a GPIB controlled DC power supply with 12 Hz sampling rate, did not provide enough bandwidth to properly conduct power measurements.
In figure 4 we see that the power consumption is mostly independent of the batch size. Low power values for AlexNet (batch of 1) and VGG (batch of 2) are associated with slower forward times per image, as shown in figure 3.
[Figure 5 data: maximum net memory utilisation [MB] versus batch size, one curve per network. Figure 6 data: memory utilisation versus parameters [MB] for a batch of 1 image.]
Figure 5: Memory vs. batch size. Maximum system memory utilisation for batches of different sizes. Memory usage shows a knee graph, due to the network model memory static allocation and the variable memory used by batch size. | 1605.07678#11 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07683 | 11 | 2
Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (*) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.
Tasks DIALOGS Average statistics Number of utterances: - user utterances - bot utterances - outputs from API calls DATASETS Vocabulary size Candidate set size Training dialogs Tasks 1-5 share the Validation dialogs same data source Test dialogs T1 T2 T3 T4 T5 55 43 12 13 7 5 18 10 7 24 23 0 3,747 4,212 1,000 1,000 1,000(*) 17 7 10 0 15 4 4 7 T6 54 6 8 40 1,229 2,406 1,618 500 1,117 Concierge 8 4 4 0 8,629 11,482 3,249 403 402 | 1605.07683#11 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 12 | 3
cells into multiple clusters (or a hierarchy of clusters) and given a query, they perform MIPS on the centroids of the top few clusters. We refer the readers to [11] for an extensive comparison of various state-of-the-art approaches for approximate K-MIPS.
Our proposal is to exploit this rich approximate K-MIPS literature to achieve scalable training and inference in HMNs. Instead of filtering the memory with heuristics, we propose to organize the memory based on approximate K-MIPS algorithms and then train the reader to learn to perform MIPS. Specifically, consider the following softmax over the memory which the reader has to perform for every reading step to retrieve a set of relevant candidates:
R_out = softmax(h(q) M^T)   (2)
where h(q) in R^d is the representation of the query, M in R^{N x d} is the memory with N being the total number of cells in the memory. We propose to replace this softmax with softmax(K), which is defined as follows: | 1605.07427#12 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 12 | Figure 6: Memory vs. parameters count. Detailed view on static parameters allocation and corresponding memory utilisation. Minimum memory of 200 MB, linear afterwards with slope 1.30.
[Figure 7 data: operations [G-Ops] versus forward time per image [ms], for batches of 1 and 16 images.]
Figure 7: Operations vs. inference time, size proportional to parameters. Relationship between operations and inference time, for batches of size 1 and 16 (biggest size for which all architectures can still run). Not surprisingly, we notice a linear trend, and therefore operations count represents a good estimation of inference time. Furthermore, we can notice an increase in the slope of the trend for larger batches, which corresponds to shorter inference times due to batch processing optimisation.
3.4 MEMORY | 1605.07678#12 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07683 | 12 | from the restaurant reservation dataset of the 2nd Dialog State Tracking Challenge, or DSTC2 (Henderson et al., 2014a), which we converted into our task format, showing that Memory Networks can outperform a dedicated slot-filling rule-based baseline. We also evaluate on a dataset of human-human dialogs extracted from an online concierge service that books restaurants for users. Overall, the per-response performance is encouraging, but the per-dialog one remains low, indicating that end-to-end models still need to improve before being able to reliably handle goal-oriented dialog.
# 2 RELATED WORK
The most successful goal-oriented dialog systems model conversation as partially observable Markov decision processes (POMDP) (Young et al., 2013). However, despite recent efforts to learn modules (Henderson et al., 2014b), they still require many hand-crafted features for the state and action space representations, which restrict their usage to narrow domains. Our simulation, used to generate goal-oriented datasets, can be seen as an equivalent of the user simulators used to train POMDP (Young et al., 2013; Pietquin and Hastie, 2013), but for training end-to-end systems. | 1605.07683#12 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 13 | C = argmax(K)(h(q) M^T)   (3)
R_out = softmax(K)(h(q) M^T) = softmax(h(q) M[C]^T)   (4)
where C is the set of indices of the top-K MIP candidate cells and M[C] is the sub-matrix of M whose rows are indexed by C. One advantage of using the softmax(K) is that it naturally focuses on cells that would normally receive the strongest gradients during learning. That is, in a full softmax, the gradients are otherwise more dispersed across cells, given the large number of cells and despite many contributing a small gradient. As our experiments will show, this results in slower training. One problematic situation when learning with the softmax(K) is when we are at the initial stages of training and the K-MIPS reader is not including the correct fact candidate. To avoid this issue, we always add the correct candidate to the top-K candidates retrieved by the K-MIPS algorithm, effectively performing a fully supervised form of learning. | 1605.07427#13 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 13 | 3.4 MEMORY
We analysed system memory consumption of the TX1 device, which uses shared memory for both CPU and GPU. Figure 5 shows that the maximum system memory usage is initially constant and then rises with the batch size. This is due to the initial memory allocation of the network model (which is the large static component) and the contribution of the memory required while processing the batch, proportionally increasing with the number of images. In figure 6 we can also notice that the initial allocation never drops below 200 MB, for networks sized below 100 MB, and it is linear afterwards, with respect to the parameters and a slope of 1.30.
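As a small worked example of the trend just described, the sketch below turns the reported 200 MB floor and 1.30 slope into a rough memory estimate; the assumption that the linear regime joins the floor at a 100 MB model size is ours, made only for continuity, and the function name is illustrative.

```python
def estimate_system_memory_mb(model_size_mb: float) -> float:
    """Rough TX1 system-memory estimate for a batch of 1, from the reported trend:
    ~200 MB floor for models below ~100 MB, then linear with slope ~1.30
    (intercept chosen so the line meets the floor at 100 MB -- an assumption)."""
    if model_size_mb <= 100.0:
        return 200.0
    return 1.30 * model_size_mb + 70.0

# e.g. a ~500 MB model (VGG-16 scale) -> roughly 720 MB of system memory
```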
# 3.5 OPERATIONS
Operations count is essential for establishing a rough estimate of inference time and hardware circuit size, in case of custom implementation of neural network accelerators. In figure 7, for a batch of 16 images, there is a linear relationship between operations count and inference time per image. Therefore, at design time, we can pose a constraint on the number of operations to keep processing speed in a usable range for real-time applications or resource-limited deployments.
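For reference, a minimal sketch of how such an operations count can be tallied layer by layer for a convolutional network; the multiply-accumulate-counts-as-two-operations convention and the example layer shape are assumptions made for illustration, not values from the paper.

```python
def conv2d_ops(h_out: int, w_out: int, c_in: int, c_out: int, k_h: int, k_w: int) -> int:
    """Approximate operation count of one convolutional layer
    (convention assumed here: one multiply-accumulate = 2 operations)."""
    return 2 * h_out * w_out * c_out * (c_in * k_h * k_w)

# Hypothetical example: a 3x3 convolution producing 56x56x256 maps from 256 input channels
# conv2d_ops(56, 56, 256, 256, 3, 3) is about 3.7 G-Ops
```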
[Figure 8 data: scatter plots of operations [G-Ops] versus net power consumption [W], for batches of 1 and 16 images.] | 1605.07678#13 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07683 | 13 | Serban et al. (2015b) list available corpora for training dialog systems. Unfortunately, no good resources exist to train and test end-to-end models in goal-oriented scenarios. Goal-oriented datasets are usually designed to train or test dialog state tracker components (Henderson et al., 2014a) and are hence of limited scale and not suitable for end-to-end learning (annotated at the state level and noisy). However, we do convert the Dialog State Tracking Challenge data into our framework. Some datasets are not open source, and require a particular license agreement or the participation to a challenge (e.g., the end-to-end task of DSTC4 (Kim et al., 2016)) or are proprietary (e.g., Chen et al. (2016)). Datasets are often based on interactions between users and existing systems (or ensemble of systems) like DSTC datasets, SFCore (Gašic et al., 2014) or ATIS (Dahl et al., 1994). This creates noise and makes it harder to interpret the errors of a model. Lastly, resources designed to connect dialog systems to users, in particular in the context of reinforcement learning, are usually built around a crowdsourcing setting such as Amazon | 1605.07683#13 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 14 | During training, the reader is updated by backpropagation from the output module, through the subset of memory cells. Additionally, the log-likelihood of the correct fact computed using K-softmax is also maximized. This second supervision helps the reader learn to modify the query such that the maximum inner product of the query with respect to the memory will yield the correct supporting fact in the top K candidate set.
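To make the reading mechanism concrete, here is a minimal NumPy sketch of a softmax(K) reader with the correct fact forced into the candidate set, as described above; the function name, the choice K=10 and the plain negative log-likelihood loss are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def k_mips_attention(h_q, M, correct_idx, K=10):
    """Minimal sketch of a softmax(K) reader over a fixed memory (illustrative).

    h_q:         (d,) query representation h(q)
    M:           (N, d) memory matrix, one row per memory cell
    correct_idx: index of the supporting fact, always kept in the candidate set
    """
    scores = M @ h_q                          # exact K-MIPS scoring: inner product with every cell
    C = np.argpartition(-scores, K)[:K]       # indices of the top-K candidates (unordered)
    if correct_idx not in C:                  # fully supervised variant: force-include the correct fact
        C = np.concatenate([C[:-1], [correct_idx]])
    sub = scores[C]
    weights = np.exp(sub - sub.max())
    weights /= weights.sum()                  # softmax restricted to the K candidates
    read = weights @ M[C]                     # attention-weighted read vector
    pos = int(np.where(C == correct_idx)[0][0])
    loss = -np.log(weights[pos])              # log-likelihood supervision on the correct fact
    return read, loss
```

In this sketch, gradients only flow through the K selected rows, which is what keeps the update focused compared to a full softmax over the whole memory.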
Until now, we described the exact K-MIPS-based learning framework, which still requires a linear look-up over all memory cells and would be prohibitive for large-scale memories. In such scenarios, we can replace the exact K-MIPS in the training procedure with the approximate K-MIPS. This is achieved by deploying a suitable memory hierarchical structure. The same approximate K-MIPS-based reader can be used during the inference stage as well. Of course, approximate K-MIPS algorithms might not return the exact MIPS candidates and will likely hurt performance, but with the benefit of achieving scalability. | 1605.07427#14 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 14 | Figure 8: Operations vs. power consumption, size proportional to parameters. Independence of power and operations is shown by a lack of directionality of the distributions shown in these scatter charts. Full resources utilisation and lower inference time for the AlexNet architecture are reached with larger batches.
[Figure 9 data: scatter plots of accuracy versus images per second [Hz], for batches of 1 and 16 images.]
Figure 9: Accuracy vs. inferences per second, size proportional to operations. A non-trivial linear upper bound is shown in these scatter plots, illustrating the relationship between prediction accuracy and throughput of all examined architectures. These are the first charts in which the area of the blobs is proportional to the amount of operations, instead of the parameters count. We can notice that larger blobs are concentrated on the left side of the charts, in correspondence of low throughput, i.e. longer inference times. Most of the architectures lie on the linear interface between the grey and white areas. If a network falls in the shaded area, it means it achieves exceptional accuracy or inference speed. The white area indicates a suboptimal region. E.g. both AlexNet architectures improve processing speed as larger batches are adopted, gaining 80 Hz.
3.6 OPERATIONS AND POWER | 1605.07678#14 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
1605.07427 | 15 | While the memory representation is fixed in this paper, updating the memory along with the query representation should improve the likelihood of choosing the correct fact. However, updating the memory will reduce the precision of the approximate K-MIPS algorithms, since all of them assume that the vectors in the memory are static. Designing efficient dynamic K-MIPS should improve the performance of HMNs even further, a challenge that we hope to address in future work.
# 3.1 Reader with Clustering-based approximate K-MIPS
Clustering-based approximate K-MIPS was proposed in [11] and it has been shown to outperform various other state-of-the-art data dependent and data independent approximate K-MIPS approaches for inference tasks. As we will show in the experiments section, clustering-based MIPS also performs better when used to train HMNs. Hence, we focus our presentation on the clustering-based approach and propose changes that were found to be helpful for learning HMNs.
Following most of the other approximate K-MIPS algorithms, [11] converts MIPS to a Maximum Cosine Similarity Search (MCSS) problem:
argmax(K)_{i in X} q^T x_i / (||q|| ||x_i||) = argmax(K)_{i in X} q^T x_i / ||x_i||   (5)
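As a rough illustration of how such a clustering-based index can be organized and queried, here is a hedged NumPy sketch: it runs a simple spherical k-means over the normalized memory rows and, at query time, searches only the cells of the few closest clusters. The cluster count, the number of probed clusters and the final ranking by raw inner product are simplifications chosen for illustration; the method of [11] relies on a specific MIPS-to-MCSS transformation that is not reproduced here.

```python
import numpy as np

def build_cluster_index(M, n_clusters=32, n_iter=10, seed=0):
    """Spherical k-means over normalized memory rows (illustrative index construction)."""
    rng = np.random.default_rng(seed)
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)        # normalize rows (cosine geometry)
    centroids = Mn[rng.choice(len(Mn), size=n_clusters, replace=False)]
    for _ in range(n_iter):
        assign = np.argmax(Mn @ centroids.T, axis=1)          # assign each cell to nearest centroid
        for c in range(n_clusters):
            members = Mn[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
                centroids[c] /= np.linalg.norm(centroids[c])
    assign = np.argmax(Mn @ centroids.T, axis=1)              # final assignment
    return centroids, assign

def approx_k_mips(q, M, centroids, assign, K=10, n_probe=4):
    """Approximate K-MIPS: probe the n_probe closest clusters, then rank by inner product."""
    top_clusters = np.argsort(-(centroids @ q))[:n_probe]     # note: ||q|| does not change the argmax
    cand = np.where(np.isin(assign, top_clusters))[0]         # candidate cell indices
    order = np.argsort(-(M[cand] @ q))[:K]                    # exact MIPS restricted to candidates
    return cand[order]
```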
4 | 1605.07427#15 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
1605.07678 | 15 | 3.6 OPERATIONS AND POWER
In this section we analyse the relationship between power consumption and number of operations required by a given model. Figure 8 reports that there is no specific power footprint for different architectures. When full resources utilisation is reached, generally with larger batch sizes, all networks consume roughly an additional 11.8 W, with a standard deviation of 0.7 W. Idle power is 1.30 W. This corresponds to the maximum system power at full utilisation. Therefore, if energy consumption is one of our concerns, for example for battery-powered devices, one can simply choose the slowest architecture which satisfies the application minimum requirements.
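A tiny back-of-the-envelope helper for that battery-driven design choice, using the ~11.8 W net full-utilisation figure quoted above; treating power as roughly constant across architectures is the simplification the text itself suggests, and the helper name is illustrative.

```python
def energy_per_image_mj(inference_ms: float, net_power_w: float = 11.8) -> float:
    """Energy spent on one forward pass: W * ms = mJ (idle power of ~1.3 W excluded)."""
    return net_power_w * inference_ms

# e.g. 40 ms per image at full utilisation -> about 472 mJ per inference
```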
3.7 ACCURACY AND THROUGHPUT
We note that there is a non-trivial linear upper bound between accuracy and number of inferences per unit time. Figure 9 illustrates that for a given frame rate, the maximum accuracy that can be achieved is linearly proportional to the frame rate itself. All networks analysed here come from several publications, and have been independently trained by other research groups. A linear fit of the accuracy shows that all architectures trade accuracy against speed. Moreover, having chosen a specific inference time, one can now come up with the theoretical accuracy upper bound when resources are fully
5 | 1605.07678#15 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
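The selection rule sketched in the "Operations and Power" chunk above (active power is roughly constant, so energy per frame scales with inference time; pick an architecture against the application's minimum requirements) can be written down in a few lines. Everything below — model names, accuracies, and timings — is an illustrative placeholder, not a measurement from the paper.

```python
# Hedged illustration: with active power roughly constant (~11.8 W extra at full
# utilisation, per the text above), energy per frame is power x inference time,
# so architectures can be filtered by a frame-rate / energy budget and the most
# accurate survivor kept. All numbers are illustrative placeholders.
POWER_W = 11.8                       # approximate active power quoted above

candidates = {                       # name: (top-1 accuracy %, inference time in ms) -- illustrative
    "net_small": (67.0, 8.0),
    "net_medium": (72.0, 20.0),
    "net_large": (76.0, 55.0),
}

def pick_architecture(candidates, min_fps, max_mj_per_frame):
    feasible = {}
    for name, (acc, ms) in candidates.items():
        fps = 1000.0 / ms
        energy_mj = POWER_W * ms     # W * ms = mJ per frame
        if fps >= min_fps and energy_mj <= max_mj_per_frame:
            feasible[name] = acc
    return max(feasible, key=feasible.get) if feasible else None

print(pick_architecture(candidates, min_fps=30, max_mj_per_frame=400))  # -> "net_medium"
```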
1605.07683 | 15 | The closest resource to ours might be the set of tasks described in (Dodge et al., 2016), since some of them can be seen as goal-oriented. However, those are question answering tasks rather than dialog, i.e. the bot only responds with answers, never questions, which does not reflect full conversation.
# 3 GOAL-ORIENTED DIALOG TASKS
All our tasks involve a restaurant reservation system, where the goal is to book a table at a restaurant. The first five tasks are generated by a simulation; the last one uses real human-bot dialogs. The data for all tasks is available at http://fb.ai/babi. We also give results on a proprietary dataset extracted from an online restaurant reservation concierge service with anonymized users.
3.1 RESTAURANT RESERVATION SIMULATION | 1605.07683#15 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
1605.07427 | 16 | $\operatorname*{argmax}^{(K)}_{i \in X} \frac{q^{\top} x_i}{\lVert q\rVert\,\lVert x_i\rVert} = \operatorname*{argmax}^{(K)}_{i \in X} \frac{q^{\top} x_i}{\lVert x_i\rVert}$ (5)
When all the data vectors x_i have the same norm, MCSS is equivalent to MIPS. However, it is often restrictive to have this additional constraint. Instead, [11] appends additional dimensions to both query and data vectors to convert MIPS to MCSS. In HMN terminology, this would correspond to adding a few more dimensions to the memory cells and input representations. The algorithm introduces two hyper-parameters, U < 1 and m ∈ N*. The first step is to scale all the vectors in the memory by the same factor, such that max_i ||x_i||_2 = U. We then apply two mappings, P and Q, on the memory cells and on the input vector, respectively. These two mappings simply concatenate m new components to the vectors and make the norms of the data points all roughly the same [9]. The mappings are defined as follows (a hedged sketch of one such construction appears after this record): | 1605.07427#16 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the-art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 | [
{
"id": "1507.05910"
},
{
"id": "1502.05698"
},
{
"id": "1503.08895"
},
{
"id": "1506.02075"
}
] |
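The chunk above stops just before the definitions of the P and Q mappings. The sketch below shows one commonly used construction for this kind of MIPS-to-MCSS reduction (rescale so max_i ||x_i||_2 = U < 1, append m components that equalise the augmented data norms, pad the query with zeros); whether it matches the elided definitions exactly is an assumption, and the function names are illustrative.

```python
# Hedged sketch of one common MIPS-to-MCSS augmentation (treat the exact
# mappings as an assumption, since they are elided in the chunk above).
import numpy as np

def preprocess_memory(memory, U=0.85, m=3):
    """P mapping: rescale so max ||x_i|| = U, then append (1/2 - ||x||^(2^i)) for i = 1..m."""
    scale = U / np.max(np.linalg.norm(memory, axis=1))
    x = memory * scale
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    extra = np.concatenate([0.5 - norms ** (2 ** i) for i in range(1, m + 1)], axis=1)
    return np.concatenate([x, extra], axis=1), scale

def preprocess_query(query, m=3):
    """Q mapping: append m zeros, so q^T x is preserved in the augmented space."""
    return np.concatenate([query, np.zeros(m)])

# With this construction, ||P(x)||^2 = m/4 + ||x||^(2^(m+1)) ~= m/4 because
# ||x|| <= U < 1, so all augmented memory norms are close to sqrt(m)/2 and
# cosine similarity over (Q(q), P(x_i)) ranks candidates in (approximately)
# the same order as the original inner products q^T x_i.
```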
1605.07678 | 16 | 5
[Figure 10: bar chart of Top-1 accuracy density (%/M-Params) per network architecture.]
Figure 10: Accuracy per parameter vs. network. Information density (accuracy per parameter) is an efficiency metric that highlights the capacity of a specific architecture to better utilise its parametric space. Models like VGG and AlexNet are clearly oversized, and do not take full advantage of their potential learning ability. On the far right, ResNet-18, BN-NIN, GoogLeNet and ENet (marked by grey arrows) do a better job at "squeezing" all their neurons to learn the given task, and are the winners of this section (a small worked example of this metric follows this record).
utilised, as seen in section 3.6. Since the power consumption is constant, we can even go one step further and obtain an upper bound on accuracy even under an energy constraint, which could possibly be an essential design factor for a network that needs to run on an embedded system.
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 | [
{
"id": "1602.07261"
},
{
"id": "1606.02147"
},
{
"id": "1512.03385"
},
{
"id": "1512.00567"
},
{
"id": "1510.00149"
}
] |
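The "information density" metric from the Figure 10 caption above is just top-1 accuracy divided by parameter count. A tiny worked example, using rough, commonly cited figures purely for illustration (not the paper's measurements):

```python
# Information density = top-1 accuracy / parameters (in millions).
# Accuracy and parameter figures below are rough, illustrative values.
models = {
    # name: (top-1 accuracy %, parameters in millions) -- approximate
    "AlexNet": (57.0, 61.0),
    "VGG-16": (71.0, 138.0),
    "GoogLeNet": (69.0, 7.0),
    "ResNet-18": (70.0, 11.7),
}

for name, (top1, mparams) in sorted(models.items(),
                                    key=lambda kv: kv[1][0] / kv[1][1],
                                    reverse=True):
    print(f"{name:10s} {top1 / mparams:6.2f} %/M-params")
# GoogLeNet and ResNet-18 come out well ahead of VGG and AlexNet, matching the
# qualitative ordering described in the caption above.
```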
1605.07683 | 16 | 3
3.1 RESTAURANT RESERVATION SIMULATION
The simulation is based on an underlying KB, whose facts contain the restaurants that can be booked and their properties. Each restaurant is defined by a type of cuisine (10 choices, e.g., French, Thai), a location (10 choices, e.g., London, Tokyo), a price range (cheap, moderate or expensive) and a rating (from 1 to 8). For simplicity, we assume that each restaurant only has availability for a single party size (2, 4, 6 or 8 people). Each restaurant also has an address and a phone number listed in the KB.
The KB can be queried using API calls, which return the list of facts related to the corresponding restaurants. Each query must contain four fields: a location, a type of cuisine, a price range and a party size. It can return facts concerning one, several or no restaurant, depending on the party size (a toy query sketch follows this record). | 1605.07683#16 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 | [
{
"id": "1512.05742"
},
{
"id": "1508.03386"
},
{
"id": "1605.05414"
},
{
"id": "1508.03391"
},
{
"id": "1508.01745"
},
{
"id": "1502.05698"
},
{
"id": "1503.02364"
},
{
"id": "1506.08909"
},
{
"id": "1603.08023"
},
{
"id": "1506.05869"
}
] |
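The KB/API-call behaviour described in the chunk above (four required fields; the call returns all facts for zero, one, or several matching restaurants) can be mimicked with a toy lookup. The record layout, field names, and `api_call` signature below are illustrative assumptions, not the actual bAbI-dialog API syntax.

```python
# Toy sketch of a KB lookup with the four required fields described above.
# Restaurant entries and the api_call signature are illustrative only.
KB = [
    {"name": "resto_paris_cheap_french_1", "cuisine": "french", "location": "paris",
     "price": "cheap", "rating": 6, "party_size": 4,
     "address": "rue_1", "phone": "00_11_22"},
    {"name": "resto_tokyo_expensive_thai_3", "cuisine": "thai", "location": "tokyo",
     "price": "expensive", "rating": 8, "party_size": 2,
     "address": "street_9", "phone": "33_44_55"},
]

def api_call(cuisine, location, price, party_size):
    """Return every (restaurant, property, value) fact for matching restaurants."""
    facts = []
    for r in KB:
        if (r["cuisine"], r["location"], r["price"], r["party_size"]) == \
           (cuisine, location, price, party_size):
            facts.extend((r["name"], k, v) for k, v in r.items() if k != "name")
    return facts  # may describe one, several, or no restaurant

print(api_call("french", "paris", "cheap", 4))
```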