doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1709.04546 | 12 | Aw; = ot =-a OL + rw; |}, (5) Ow; Ow;
where a is the learning rate. As we can see from Eq. @). the gradient magnitude of the L2 penalty is proportional to ||w;||,, thus forms a negative feedback loop that stabilizes ||w;||, to an equilibrium value. Empirically, we find that ||w;||, tends to increase or decrease dramatically at the beginning of
3 (3)
the training, and then varies mildly within a small range, which indicates ||w;||, ~ |}wi + Awi|lo. In practice, we usually have || Aw;||, / ||wil|2 < 1, thus Aw; is approximately orthogonal to w, i.e. w;:- Aw; = 0.
the training, and then varies mildly within a small range, which indicates In practice, we usually have || Aw;||, / ||wil|2 < 1, thus Aw; is approximately w;:- Aw; = 0. Let J)),,, and, be the vector projection and rejection of pe on w;, which
# âwi â
on wi, which are deï¬ned as
OL Wi Wi OL ly : liw = Iy,. 6 Mei (3 ws -) \|willoâ Lu Ow; Mei ©)
# OL | 1709.04546#12 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 13 | OL Wi Wi OL ly : liw = Iy,. 6 Mei (3 ws -) \|willoâ Lu Ow; Mei ©)
# OL
From Eq. (5) and (6}, it is easy to show
|Awille ~ Ew laay. (7) Twill, â U0, 2
As discussed in Sec.|2.1| when batch normalization is used, or when linear rectifiers are used as activation functions, the magnitude of ||w;||, becomes irrelevant; it is the direction of w; that actually makes a difference in the overall network function. If L2 weight decay is not applied, the magnitude of w;âs direction change will decrease as ||w;||, increases during the training process, which can potentially lead to overfitting (discussed in detail in Sec. . On the other hand, Eq. (7) shows that L2 weight decay implicitly normalizes the weights, such that the magnitude of w;âs direction change does not depend on ||w;||,, and can be tuned by the product of a and 4. In the following, we refer to ||Aw;||, / will. as the effective learning rate of w;.
While L2 weight decay produces the normalization effect in an implicit and approximate way, we will show that explicitly doing so enables more precise control of the effective learning rate. | 1709.04546#13 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 14 | While L2 weight decay produces the normalization effect in an implicit and approximate way, we will show that explicitly doing so enables more precise control of the effective learning rate.
# 3 NORMALIZED DIRECTION-PRESERVING ADAM
We ï¬rst present the normalized direction-preserving Adam (ND-Adam) algorithm, which essentially improves the optimization of the input weights of hidden units, while employing the vanilla Adam algorithm to update other parameters. Speciï¬cally, we divide the trainable parameters, θ, into two . Then we update θv and θs by sets, θv and θs, such that θv = different rules, as described by Alg. 1. The learning rates for the two sets of parameters are denoted by αv
In Alg. 1, computing gt (wi) and wi,t may take slightly more time compared to Adam, which how- ever is negligible in practice. On the other hand, to estimate the second order moment of each Rn, Adam maintains n scalars, whereas ND-Adam requires only one scalar, vt (wi), and thus wi â reduces the memory overhead of Adam. | 1709.04546#14 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 15 | In the following, we address the direction missing problem and the ill-conditioning problem dis- cussed in Sec. 2.1, and explain Alg. 1 in detail. We show how the proposed algorithm jointly solves the two problems, as well as its relation to other normalization schemes.
3.1 PRESERVING GRADIENT DIRECTIONS
Assuming the stationarity of a hidden unitâs input distribution, the SGD update (possibly with mo- mentum) of the input weight vector is a linear combination of historical gradients, and thus can only lie in the span of the input vectors. Consequently, the input weight vector itself will eventually converge to the same subspace.
In contrast, the Adam algorithm adapts the global learning rate to each scalar parameter indepen- dently, such that the gradient of each parameter is normalized by a running average of its magnitudes, which changes the direction of the gradient. To preserve the direction of the gradient w.r.t. each input weight vector, we generalize the learning rate adaptation scheme from scalars to vectors.
Let gt (wi), mt (wi), vt (wi) be the counterparts of gt, mt, vt for vector wi. Since Eq. (1a) is a linear combination of historical gradients, it can be extended to vectors without any change; or equivalently, we can rewrite it for each vector as | 1709.04546#15 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 16 | mt (wi) = β1mtâ1 (wi) + (1 β1) gt (wi) . (8)
â
# Algorithm 1: Normalized direction-preserving Adam /* Initialization t â for i
/* Initialization */ t+ 0; for ic N do win â wio/ [lweollys mo (wi) = 05 vo (wi) â 03 /* Perform T iterations of training «/ while t < T do tet+l; /* Update 6â x/ for i ¢ N do H (wi) â OL/duy; ge (wi) â Ge (wi) â (Ge (Wi) - Witâ1) Wieâ15 my (wi) â Gime (wi) + (1 â 81) gt (wi); vp (wi) & Bove (wi) + (1 = Be) | ge (wa) | 3; rie (wi) â me (wi) / (1 â BE); Br (wi) â ve (wi) / (1 â 88); Wit â Witâ1 â Af Ty (wi) / ( b, (wi) + e): wit â Wit/ lle tllo3 /* Update 0° using Adam «/ 0; < AdamUpdate (974; ag, B1, 62); return 67;
We then extend Eq. (1b) as | 1709.04546#16 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 17 | We then extend Eq. (1b) as
2 2 , vt (wi) = β2vtâ1 (wi) + (1 β2)
# II ge (ws)|I3
â
i.e., instead of estimating the average gradient magnitude for each individual parameter, we estimate 2 2 for each vector wi. In addition, we modify Eq. (2) and (3) accordingly as the average of
||g: (wi)
and
Ëmt (wi) = mt (wi) βt 1 1 , Ëvt (wi) = vt (wi) βt 1 2 , (10)
â
â
» Wit = Wit-1 â "iy, (wi) - (1) 01 (wi) +â¬
Here, Ëmt (wi) is a vector with the same dimension as wi, whereas Ëvt (wi) is a scalar. Therefore, when applying Eq. (11), the direction of the update is the negative direction of Ëmt (wi), and thus is in the span of the historical gradients of wi. | 1709.04546#17 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 18 | Despite the empirical success of SGD, a question remains as to why it is desirable to constrain the input weights in the span of the input vectors. A possible explanation is related to the manifold hypothesis, which suggests that real-world data presented in high dimensional spaces (e.g., images, audios, text) concentrates on manifolds of much lower dimensionality (Cayton, 2005; Narayanan & Mitter, 2010). In fact, commonly used activation functions, such as (leaky) ReLU, sigmoid, tanh, can only be activated (not saturating or having small gradients) by a portion of the input vectors, in whose span the input weights lie upon convergence. Assuming the local linearity of the manifolds of data or hidden-layer representations, constraining the input weights in the subspace that contains that portion of the input vectors, encourages the hidden units to form local coordinate systems on the corresponding manifold, which can lead to good representations (Rifai et al., 2011).
3.2 SPHERICAL WEIGHT OPTIMIZATION
The ill-conditioning problem occurs when the magnitude change of an input weight vector can be compensated by other parameters, such as the scaling factor of batch normalization, or the output
(9) | 1709.04546#18 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 19 | The ill-conditioning problem occurs when the magnitude change of an input weight vector can be compensated by other parameters, such as the scaling factor of batch normalization, or the output
(9)
weight vector, without affecting the overall network function. Consequently, suppose we have two DNNs that parameterize the same function, but with some of the input weight vectors having differ- ent magnitudes, applying the same SGD or Adam update rule will, in general, change the network functions in different ways. Thus, the ill-conditioning problem makes the training process inconsis- tent and difï¬cult to control.
More importantly, when the weights are not properly regularized (e.g., without using L2 weight decay), the magnitude of w;,âs direction change will decrease as ||w;|| increases during the training process. As a result, the effective learning rate for w; tends to decrease faster than expected. The gradient noise introduced by large learning rates is crucial to avoid sharp minima (Smith & Le! (2018). And it is well known that sharp minima generalize worse than flat minima (Hochreiter &| Schmidhuber}| 1997). | 1709.04546#19 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 20 | As shown in Sec. when combined with SGD, L2 weight decay can alleviate the ill-conditioning problem by implicitly and approximately normalizing the weights. However, the approximation fails when ||2w;||) is far from the equilibrium due to improper initialization, or drastic changes in the magnitudes of the weight vectors. In addition, due to the direction missing problem, naively applying L2 weight decay to Adam does not yield the same effect as it does on SGD. In concurrent work, |Loshchilov & Hutterâ 2017ap address the problem by decoupling the weight decay and the optimization steps taken w.r.t. the loss function. However, their experimental results indicate that improving L2 weight decay alone cannot eliminate the generalization gap between Adam and SGD.
The ill-conditioning problem is also addressed by Neyshabur et al. (2015), by employing a geometry invariant to rescaling of weights. However, their proposed methods do not preserve the direction of gradient. | 1709.04546#20 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 21 | To address the ill-conditioning problem in a more principled way, we restrict the L2-norm of each wi to 1, and only optimize its direction. In other words, instead of optimizing wi in a n-dimensional 1)-dimensional unit sphere. Speciï¬cally, we ï¬rst compute the raw space, we optimize wi on a (n gradient w.r.t. wi, ¯gt (wi) = âL/âwi, and project the gradient onto the unit sphere as
Here,
gt (wi) = ¯gt (wi) (¯gt (wi) (12)
ge (wi) =e (we) â (Ge (wi) + wea) wie. ||wisâ1||, = 1. Then we follow Eq. {8)-{I0}, and replace with _ a? . Wit = Wit-1 â âââââ mr (wi), and wig =
â | 1709.04546#21 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 22 | _ a? . Wit Wit = Wit-1 â âââââ mr (wi), and wig = â_. (13) dy (wi) + ⬠@itlle
In Eq. (12), we keep only the component that is orthogonal to w;,,-1. However, 77; (w;) is not necessarily orthogonal as well; moreover, even when 1, (w;) is orthogonal to w;,4â1, ||w;||) can still increase according to the Pythagorean theorem. Therefore, we explicitly normalize w;,, in Eq. (13), to ensure lwitlle = 1 after each update. Also note that, since w;,~1 is a linear combination of its historical gradients, g; (w;) still lies in the span of the historical gradients after the projection in Eq. (12).
Compared to SGD with L2 weight decay, spherical weight optimization explicitly normalizes the weight vectors, such that each update to the weight vectors only changes their directions, and strictly keeps the magnitudes constant. As a result, the effective learning rate of a weight vector is
# Aw;.tll. [Awicle -, l|e5,2-1llo
Aw;.tll. Fi > [Awicle -, lie (wy ow ay l|e5,2-1llo 0, (wi) | 1709.04546#22 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 23 | Aw;.tll. Fi > [Awicle -, lie (wy ow ay l|e5,2-1llo 0, (wi)
which enables precise control over the learning rate of wi through a single hyperparameter, αv t , rather than two as required by Eq. (7).
Note that it is possible to control the effective learning rate more precisely, by normalizing 71, (w;) with ||72¢ (w;)||p, instead of by \/%; (wi). However, by doing so, we lose information provided by ||7iz (w;) ||. at different time steps. In addition, since rn, (w;) is less noisy than gy (w;), ||77¢ (wa) || /V/ Gz (wi) becomes small near convergence, which is considered a desirable property of Adam (Kingma & Ba\|2015). Thus, we keep the gradient normalization scheme intact.
We note the difference between various gradient normalization schemes and the normalization scheme employed by spherical weight optimization. As shown in Eq. (11), ND-Adam general- izes the gradient normalization scheme of Adam, and thus both Adam and ND-Adam normalize | 1709.04546#23 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 24 | the gradient by a running average of its magnitude. This, and other similar schemes (Hazan et al., 2015; Yu et al., 2017) make the optimization less susceptible to vanishing and exploding gradients. The proposed spherical weight optimization serves a different purpose. It normalizes each weight vector and projects the gradient onto a unit sphere, such that the effective learning rate can be con- trolled more precisely. Moreover, it provides robustness to improper weight initialization, since the magnitude of each weight vector is kept constant.
For nonlinear activation functions (without batch normalization), such as sigmoid and tanh, an extra scaling factor is needed for each hidden unit to express functions that require unnormalized weight ), the activation of hidden vectors. For instance, given an input vector x · unit i is then given by
# â yi = Ï (γiwi ·
(15) where γi is the scaling factor, and bi is the bias. Consequently, normalizing weight vectors does not limit the expressiveness of models.
# 3.3 RELATION TO WEIGHT NORMALIZATION AND BATCH NORMALIZATION | 1709.04546#24 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 25 | # 3.3 RELATION TO WEIGHT NORMALIZATION AND BATCH NORMALIZATION
A related normalization and reparameterization scheme, weight normalization (Salimans & Kingma, 2016), has been developed as an alternative to batch normalization, aiming to accelerate the conver- gence of SGD optimization. We note the difference between spherical weight optimization and weight normalization. First, the weight vector of each hidden unit is not directly normalized in = 1 in general. At training time, the activation of hidden unit i is weight normalization, i.e,
||w;||,
n=ol 7% web), (16) llewills which is equivalent to Eq. ) for the forward pass. For the backward pass, the effective learning rate still depends on ||w,||, in weight normalization, hence it does not solve the ill-conditioning problem. At inference time, both of these two schemes can merge w; and 4; into a single equivalent weight vector, w} = y;,w;, or w} = eRâ
While spherical weight optimization naturally encompasses weight normalization, it can further beneï¬t from batch normalization. When combined with batch normalization, Eq. (15) evolves into
x) + bi) , (17)
# yi = Ï (γi BN (wi · | 1709.04546#25 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 26 | x) + bi) , (17)
# yi = Ï (γi BN (wi ·
where BN ( ) represents the transformation done by batch normalization without scaling and shift- · ing. Here, γi serves as the scaling factor for both the normalized weight vector and batch normal- ization.
# 4 REGULARIZED SOFTMAX
For multi-class classiï¬cation tasks, the softmax function is the de facto activation function for the output layer. Despite its simplicity and intuitive probabilistic interpretation, we observe a related problem to the ill-conditioning problem we have addressed. Similar to how different magnitudes of weight vectors result in different updates to the same network function, the learning signal back- propagated from the softmax layer varies with the overall magnitude of the logits.
Specifically, when using cross entropy as the surrogate loss with one-hot target vectors, the predic- tion is considered correct as long as arg max,<c (2-) is the target class, where z, is the logit before the softmax activation, corresponding to category c ⬠C. Thus, the logits can be positively scaled together without changing the predictions, whereas the cross entropy and its derivatives will vary with the scaling factor. Concretely, denoting the scaling factor by 7, the gradient w.r.t. each logit is aL exp (2e) 2 (nize) | 1709.04546#26 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 27 | aL exp (2e) and 2 nexp (nize) O22" LDeccexp(nze) | Oz Nee EXD (1%) : (18)
where Ëc is the target class, and ¯c
⬠Câ C\ {
# . Ëc }
For Adam and ND-Adam, since the gradient w.r.t. each scalar or vector are normalized, the absolute magnitudes of Eq. (18) are irrelevant. Instead, the relative magnitudes make a difference here. When
η is small, we have
OL/dz ol OL/dzz| |C|â-1" (19) im n-0
|C| â
which indicates that, when the magnitude of the logits is small, softmax encourages the logit of the target class to increase, while equally penalizing that of the other classes, regardless of the difference in Ëz . However, it is more reasonable to penalize more the logits that are Ëz } closer to Ëz, which are more likely to cause misclassiï¬cation.
On the other end of the spectrum, assuming no two digits are the same, we have
AL/dze| _, ,, |OL/Oze" ace 1, hm | OL /ox lim 00 =0, (20) | 1709.04546#27 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 28 | AL/dze| _, ,, |OL/Oze" ace 1, hm | OL /ox lim 00 =0, (20)
where ¢â = arg max,¢c\ 42} (Zc), and éâ ⬠C\ {é,c'}. Eq. (20) indicates that, when the magnitude of the logits is large, softmax penalizes only the largest logit of the non-target classes. In this case, although the logit that is most likely to cause misclassification is strongly penalized, the logits of other non-target classes are ignored. As a result, the logits of the non-target classes tend to be similar at convergence, ignoring the fact that some classes are closer to each other than the others. The latter case is related to the saturation problem of softmax discussed in the literature (Oland et al.||2017), where they focus on the problem of small absolute gradient magnitude, which nevertheless does not affect Adam and ND-Adam. | 1709.04546#28 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 29 | We propose two methods to exploit the prior knowledge that the magnitude of the logits should not be too small or too large. First, we can apply batch normalization to the logits. But instead of setting γcâs as trainable variables, we consider them as a single hyperparameter, γC, such that . Tuning the value of γC can lead to a better trade-off between the two extremes γc = γC, described by Eq. (19) and (20). We observe in practice that the optimal value of γC tends to be the same for different optimizers or different network widths, but varies with network depth. We refer to this method as batch-normalized softmax (BN-Softmax).
Alternatively, since the magnitude of the logits tends to grow larger than expected (in order to mini- mize the cross entropy), we can apply L2-regularization to the logits by adding the following penalty to the loss function:
Xe 2 be= Fh ceC (21)
# câC
where λC is a hyperparameter to be tuned. Different from BN-Softmax, λC can also be shared by different networks of different depths.
# 5 EXPERIMENTS | 1709.04546#29 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 31 | To empirically examine the effect of L2 weight decay, we train a wide residual network (WRN) (Zagoruyko & Komodakis, 2016b) of 22 layers, with a width of 7.5 times that of a vanilla ResNet. Using the notation suggested by Zagoruyko & Komodakis (2016b), we refer to this network as WRN-22-7.5. We train the network on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009), with a small modiï¬cation to the original WRN architecture, and with a different learning rate anneal- ing schedule. Speciï¬cally, for simplicity and slightly better performance, we replace the last fully connected layer with a convolutional layer with 10 output feature maps. i.e., we change the layers after the last residual block from BN-ReLU-GlobalAvgPool-FC-Softmax to BN-ReLU-Conv-GlobalAvgPool-Softmax. In addition, for clearer comparisons, the learn- ing rate is annealed according to a cosine function without restart (Loshchilov & Hutter, 2017b; Gastaldi, 2017). We train the model for 80k iterations with a batch size | 1709.04546#31 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 33 | As a common practice, we use SGD with a momentum of 0.9, the analysis for which is similar to that in Sec. 2.2] Due to the linearity of derivatives and momentum, Aw; can be decomposed as Aw; = Aw! + Aw?, where Aw! and Aw? are the components corresponding to the original loss function, L (-), and the L2 penalty term (see Eq. {4)), respectively. Fig [lalshows the ratio between the scalar projection of Aw! on Aw? and ||Aw?][,, which indicates how the tendency of Aw! to increase ||w;||, is compensated by Aw?. Note that Aw? points to the negative direction of w;, even when momentum is used, since the direction change of w; is slow. As shown in Fig. [Ta] at the beginning of the training, Aw? dominants and quickly adjusts ||w;||, to its equilibrium value. During the middle stage of the training, the projection of Aw! on Aw?, and Aw? almost cancel each other. Then, towards the end of the training, the gradient of w; diminishes rapidly, making Aw? dominant again. Therefore, Eq. (7) holds more accurately during the middle stage of the training. | 1709.04546#33 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 34 | In Fig. we show how the effective learning rate varies in different hyperparameter settings. By Eq. (7), JAwi|l, / ||willz is expected to remain the same as long as a stays constant, which is confirmed by the fact that the curve for ag = 0.1, \ = 0.001 overlaps with that for ag = 0.05, A = 0.002. However, comparing the curve for ag = 0.1,A = 0.001, with that for ag = 0.1,A = 0.0005, we can see that the value of ||Azw;||, / ||w;||, does not change proportionally to a. On the other hand, by using ND-Adam, we can control the value of || Aw;||, / ||w;||, more precisely by adjusting the learning rate for weight vectors, aâ. For the same training step, changes in aâ lead to approximately proportional changes in ||Aw;||, / ||wi||, as shown by the two curves corresponding to ND-Adam in Fig. [Ib]
5. = 0,002 0.1, 4 = 0.0005 12 0.000 0 10000 2000030000 40000-50000 6oN00â~7OUD0 SOKO > 1000020000 00040000 50000-6000 training steps training steps 7000080000 | 1709.04546#34 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 36 | 5.2 PERFORMANCE EVALUATION
To compare the generalization performance of SGD, Adam, and ND-Adam, we train the same WRN- 22-7.5 network on the CIFAR-10 and CIFAR-100 datasets. For SGD and ND-Adam, we ï¬rst tune the hyperparameters for SGD (α0 = 0.1, λ = 0.001, momentum 0.9), then tune the initial learning rate of ND-Adam for weight vectors to match the effective learning rate to that of SGD, i.e., αv 0 = 0.05, as shown in Fig. 1b. While L2 weight decay can greatly affect the performance of SGD, it does not noticeably beneï¬t Adam in our experiments. For Adam and ND-Adam, β1 and β2 are set to the default values of Adam, i.e., β1 = 0.9, β2 = 0.999. Although the learning rate of Adam is usually set to a constant value, we observe better performance with the cosine learning rate schedule. The initial learning rate of Adam (α0), and that of ND-Adam for scalar parameters (αs 0) are both tuned to 0.001. We use horizontal ï¬ips and random crops for data augmentation, and no dropout is used. | 1709.04546#36 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 37 | We ï¬rst experiment with the use of trainable scaling parameters (γi) of batch normalization. As shown in Fig. 2, at convergence, the test accuracies of ND-Adam are signiï¬cantly improved upon that of vanilla Adam, and matches that of SGD. Note that at the early stage of training, the test accu- racies of Adam increase more rapidly than that of ND-Adam and SGD. However, the test accuracies remain at a high level afterwards, which indicates that Adam tends to quickly ï¬nd and get stuck in bad local minima that do not generalize well.
The average results of 3 runs are summarized in the ï¬rst part of Table 1. Interestingly, compared to SGD, ND-Adam shows slightly better performance on CIFAR-10, but worse performance on CIFAR-100. This inconsistency may be related to the problem of softmax discussed in Sec. 4, that there is a lack of proper control over the magnitude of the logits. But overall, given comparable ef- fective learning rates, ND-Adam and SGD show similar generalization performance. In this sense, the effective learning rate is a more natural learning rate measure than the learning rate hyperparam- eter. | 1709.04546#37 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 38 | â SGD: ay = 0.1, = 0.001 ag = 0.05 0 1000 2000030000900 «5000069007000 S000 fi 100 20000-30000 40000 50000-69000 70000 $0000 training steps training steps
Figure 2: Test accuracies of the same network trained with SGD, Adam, and ND-Adam. De- tails are shown in the ï¬rst part of Table 1. Figure 3: Magnitudes of softmax logits in differ- ent settings. Results of WRN-22-7.5 networks trained on CIFAR-10.
Next, we repeat the experiments with the use of BN-Softmax. As discussed in Sec. 3.2, γiâs can be removed from a linear rectiï¬er network, without changing the overall network function. Although this property does not strictly hold for residual networks due to the skip connections, we observe that when BN-Softmax is used, simply removing the scaling factors results in slightly better performance for all three algorithms. Thus, we only report results for this setting. The scaling factor of the logits, γC, is set to 2.5 for CIFAR-10, and 1 for CIFAR-100. | 1709.04546#38 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 39 | As shown in the second part of Table 1, while we obtain the best generalization performance with ND-Adam, the improvement is most prominent for Adam, and is relatively small for SGD. This discrepancy can be explained by comparing the magnitudes of softmax logits without regularization. As shown in Fig. 3, the magnitude of logits corresponding to Adam is much larger than that of ND- Adam and SGD, and therefore beneï¬ts more from the regularization.
Table 1: Test error rates of WRN-22-7.5 net- works on CIFAR-10 and CIFAR-100. Based on a TensorFlow implementation of WRN.
Table 2: Test error rates of WRN-22-7.5 and WRN-28-10 networks on CIFAR-10 and CIFAR-100. Based on the original implemen- tation of WRN.
# CIFAR-10 Error (%)
# CIFAR-100 Error (%) | 1709.04546#39 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 41 | While the TensorFlow implementation we use already provides an adequate test bed, we notice that it is different from the original implementation of WRN in several aspects. For instance, they use different nonlinearities (leaky ReLU vs. ReLU), and use different skip connections for down- sampling (average pooling vs. strided convolution). A subtle yet important difference is that, L2regularization is applied not only to weight vectors, but also to the scales and biases of batch normal- ization in the original implementation, which leads to better generalization performance. For further comparison between SGD and ND-Adam, we reimplement ND-Adam and test its performance on a PyTorch version of the original implementation (Zagoruyko & Komodakis, 2016a). | 1709.04546#41 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 42 | Due to the aforementioned differences, we use a slightly different hyperparameter setting in this experiment. Speciï¬cally, for SGD λ is set to 5e 6 (L2- 4, while for ND-Adam λ is set to 5e regularization for biases), and both αs 0 are set to 0.04. In this case, regularizing softmax does not yield improved performance for SGD, since the L2-regularization applied to γiâs and the last layer weights can serve a similar purpose. Thus, we only apply L2-regularized softmax for ND-Adam with λC = 0.001. The average results of 3 runs are summarized in Table 2. Note that the performance of SGD for WRN-28-10 is slightly better than that reported with the original imple- mentation (i.e., 4.00 and 19.25), due to the modiï¬cations described in Sec. 5.1. In this experiment, SGD and ND-Adam show almost identical generalization performance.
# 6 CONCLUSION | 1709.04546#42 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 43 | # 6 CONCLUSION
We introduced ND-Adam, a tailored version of Adam for training DNNs, to bridge the general- ization gap between Adam and SGD. ND-Adam is designed to preserve the direction of gradient for each weight vector, and produce the regularization effect of L2 weight decay in a more precise and principled way. We further introduced regularized softmax, which limits the magnitude of soft- max logits to provide better learning signals. Combining ND-Adam and regularized softmax, we show through experiments signiï¬cantly improved generalization performance, eliminating the gap between Adam and SGD. From a high-level view, our analysis and empirical results suggest the need for more precise control over the training process of DNNs.
# REFERENCES
Devansh Arpit, StanisÅaw JastrzËebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxin- der S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, 2017.
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In International Conference on Machine Learning. | 1709.04546#43 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 44 | Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In International Conference on Machine Learning.
Lawrence Cayton. Algorithms for manifold learning. Univ. of California at San Diego Tech. Rep, pp. 1â17, 2005.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121â2159, 2011.
Xavier Gastaldi. Shake-shake regularization of 3-branch residual networks. In Workshop of Inter- national Conference on Learning Representations, 2017.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectiï¬er neural networks. International Conference on Artiï¬cial Intelligence and Statistics, pp. 315â323, 2011. In
Elad Hazan, Kï¬r Levy, and Shai Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex opti- mization. In Advances in Neural Information Processing Systems, pp. 1594â1602, 2015. | 1709.04546#44 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 45 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770â778, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1â42, 1997.
Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448â456, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Tech- nical report, University of Toronto, 2009.
Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101, 2017a.
Ilya Loshchilov and Frank Hutter. Sgdr: stochastic gradient descent with restarts. In International Conference on Learning Representations, 2017b. | 1709.04546#45 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 46 | Ilya Loshchilov and Frank Hutter. Sgdr: stochastic gradient descent with restarts. In International Conference on Learning Representations, 2017b.
Hariharan Narayanan and Sanjoy Mitter. Sample complexity of testing the manifold hypothesis. In Advances in Neural Information Processing Systems, pp. 1786â1794, 2010.
Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-sgd: Path-normalized opti- mization in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2422â2430, 2015.
Anders Oland, Aayush Bansal, Roger B Dannenberg, and Bhiksha Raj. Be careful what you arXiv preprint backpropagate: A case for linear output activations & gradient boosting. arXiv:1707.04199, 2017.
Salah Rifai, Yann N Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classiï¬er. In Advances in Neural Information Processing Systems, pp. 2294â2302, 2011. | 1709.04546#46 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 47 | Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accel- erate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901â909, 2016.
Samuel L Smith and Quoc V Le. A bayesian perspective on generalization and stochastic gradient descent. In International Conference on Learning Representations, 2018.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du- mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1â9, 2015.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5âRmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, and Benjamin Recht. The In Advances in Neural In- marginal value of adaptive gradient methods in machine learning. formation Processing Systems, 2017.
Neal Wu. A tensorï¬ow implementation of wide residual networks, 2016. URL https:// github.com/tensorflow/models/tree/master/research/resnet. | 1709.04546#47 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.04546 | 48 | Neal Wu. A tensorï¬ow implementation of wide residual networks, 2016. URL https:// github.com/tensorflow/models/tree/master/research/resnet.
Adams Wei Yu, Qihang Lin, Ruslan Salakhutdinov, and Jaime Carbonell. Normalized gradient with adaptive stepsize method for deep neural network training. arXiv preprint arXiv:1707.04822, 2017.
Sergey Zagoruyko and Nikos Komodakis. A pytorch implementation of wide residual networks, 2016a. URL https://github.com/szagoruyko/wide-residual-networks.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016b.
Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Rep- resentations, 2017. | 1709.04546#48 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918 | [
{
"id": "1711.05101"
},
{
"id": "1707.04199"
},
{
"id": "1605.07146"
},
{
"id": "1707.04822"
}
] |
1709.02802 | 0 | # Towards Proving the Adversarial Robustness of Deep Neural Networks
Guy Katz, Clark Barrett, David L. Dill, Kyle Julian and Mykel J. Kochenderfer Stanford University
{guyk, clarkbarrett, dill, kjulian3, mykel}@stanford.edu
Autonomous vehicles are highly complex systems, required to function reliably in a wide variety of situations. Manually crafting software controllers for these vehicles is difï¬cult, but there has been some success in using deep neural networks generated using machine-learning. However, deep neural networks are opaque to human engineers, rendering their correctness very difï¬cult to prove manually; and existing automated techniques, which were not designed to operate on neural networks, fail to scale to large systems. This paper focuses on proving the adversarial robustness of deep neural networks, i.e. proving that small perturbations to a correctly-classiï¬ed input to the network cannot cause it to be misclassiï¬ed. We describe some of our recent and ongoing work on verifying the adversarial robustness of networks, and discuss some of the open questions we have encountered and how they might be addressed.
# Introduction | 1709.02802#0 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 1 | [email protected] [email protected]
# Abstract
Common recurrent neural architectures scale poorly due to the intrinsic difï¬culty in par- allelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is de- signed to provide expressive recurrence, en- able highly parallelized implementation, and comes with careful initialization to facili- tate training of deep models. We demon- strate the effectiveness of SRU on multiple SRU achieves 5â9x speed-up NLP tasks. over cuDNN-optimized LSTM on classiï¬ca- tion and question answering datasets, and de- livers stronger results than LSTM and convo- lutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model (Vaswani et al., 2017) on translation by incorporating SRU into the architecture.1
# Introduction | 1709.02755#1 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 1 | # Introduction
Designing software controllers for autonomous vehicles is a difï¬cult and error-prone task. A main cause of this difï¬culty is that, when deployed, autonomous vehicles may encounter a wide variety of situations and are required to perform reliably in each of them. The enormous space of possible situations makes it nearly impossible for a human engineer to anticipate every corner-case.
Recently, deep neural networks (DNNs) have emerged as a way to effectively create complex soft- ware. Like other machine-learning generated systems, DNNs are created by observing a ï¬nite set of input/output examples of the correct behavior of the system in question, and extrapolating from them a software artifact capable of handling previously unseen situations. DNNs have proven remarkably useful in many applications, including including speech recognition [8], image classiï¬cation [14], and game playing [20]. There has also been a surge of interest in using them as controllers in autonomous vehicles such as automobiles [3] and aircraft [12]. | 1709.02802#1 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 2 | # Introduction
Recurrent neural networks (RNN) are at the core of state-of-the-art approaches for a large num- ber of natural language tasks, including machine translation (Cho et al., 2014; Bahdanau et al., 2015; Jean et al., 2015; Luong et al., 2015), lan- guage modeling (Zaremba et al., 2014; Gal and Ghahramani, 2016; Zoph and Le, 2016), opin- ion mining (Irsoy and Cardie, 2014), and situated language understanding (Mei et al., 2016; Misra et al., 2017; Suhr et al., 2018; Suhr and Artzi, 2018). Key to many of these advancements are architectures of increased capacity and computa- tion. For instance, the top-performing models for semantic role labeling and translation use eight re- current layers, requiring days to train (He et al., 2017; Wu et al., 2016b). The scalability of these models has become an important problem that im- pedes NLP research.
1Our code is available at https://github.com/ taolei87/sru. | 1709.02755#2 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 2 | The intended use of DNNs in autonomous vehicles raises many questions regarding the certiï¬cation of such systems. Many of the common practices aimed at increasing software reliability â such as code reviews, refactoring, modular designs and manual proofs of correctness â simply cannot be applied to DNN-based software. Further, existing automated veriï¬cation tools are typically ill-suited to reason about DNNs, and they fail to scale to anything larger than toy examples [18, 19]. Other approaches use various forms of approximation [2, 9] to achieve scalability, but using approximations may not meet the certiï¬cation bar for safety-critical systems. Thus, it is clear that new methodologies and tools for scalable veriï¬cation of DNNs are sorely needed.
We focus here on a speciï¬c kind of desirable property of DNNs, called adversarial robustness. Adversarial robustness measures a networkâs resilience against adversarial inputs [21]: inputs that are produced by taking inputs that are correctly classiï¬ed by the DNN and perturbing them slightly, in a way that causes them to be misclassiï¬ed by the network. For example, for a DNN for image recognition | 1709.02802#2 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 3 | 1Our code is available at https://github.com/ taolei87/sru.
The difï¬culty of scaling recurrent networks arises from the time dependence of state com- putation. In common architectures, such as Long Short-term Memory (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Units (GRU; Cho et al., 2014), the computation of each step is suspended until the complete ex- ecution of the previous step. This sequential de- pendency makes recurrent networks signiï¬cantly slower than other operations, and limits their ap- plicability. For example, recent translation mod- els consist of non-recurrent components only, such as attention and convolution, to scale model train- ing (Gehring et al., 2017; Vaswani et al., 2017).
In this work, we introduce the Simple Recurrent Unit (SRU), a unit with light recurrence that offers both high parallelization and sequence modeling capacity. The design of SRU is inspired by pre- vious efforts, such as Quasi-RNN (QRNN; Brad- bury et al., 2017) and Kernel NN (KNN; Lei et al., 2017), but enjoys additional beneï¬ts: | 1709.02755#3 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 3 | L. Bulwahn, M. Kamali, S. Linker (Eds.): First Workshop on Formal Veriï¬cation of Autonomous Vehicles (FVAV 2017). EPTCS 257, 2017, pp. 19â26, doi:10.4204/EPTCS.257.3
20
TowardsProving Adversarial Robustness ofDeepNeuralNetworks
such examples can correspond to slight distortions in the input image that are invisible to the human eye, but cause the network to assign the image a completely different classiï¬cation. It has been observed that many state-of-the-art DNNs are highly vulnerable to adversarial inputs, and several highly effective techniques have been devised for ï¬nding such inputs [4, 7]. Adversarial attacks can be carried out in the real world [15], and thus constitute a source of concern for autonomous vehicles using DNNs â making it desirable to verify that these DNNs are robust. | 1709.02802#3 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 4 | ⢠SRU exhibits the same level of parallelism as convolution and feed-forward nets. This is achieved by balancing sequential dependence and independence: while the state compu- tation of SRU is time-dependent, each state dimension is independent. This simpliï¬ca- tion enables CUDA-level optimizations that parallelize the computation across hidden di- mensions and time steps, effectively using the full capacity of modern GPUs. Figure 1 com- pares our architectureâs runtimes to common architectures.
⢠SRU replaces the use of convolutions (i.e., n- gram ï¬lters), as in QRNN and KNN, with more recurrent connections. This retains modeling capacity, while using less compu- tation (and hyper-parameters).
1=32,d=256 (k=3) | (k=2) | sru | (0) 2 4 6 1=128,d=512 mmm forward backward (0) 10 20 30 40
# conv2d
# conv2d
Figure 1: Average processing time in milliseconds of a batch of 32 samples using cuDNN LSTM, word- level convolution conv2d (with ï¬lter width k = 2 and k = 3), and the proposed SRU. We vary the number of tokens per sequence (l) and feature dimension (d). | 1709.02755#4 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 4 | In a recent paper [13], we proposed a new decision procedure, called Reluplex, designed to solve systems of linear equations with certain additional, non-linear constraints. In particular, neural networks and various interesting properties thereof can be encoded as input to Reluplex, and the properties can then be proved (or disproved, in which case a counter example is provided). We used Reluplex to verify various properties of a prototype DNN implementation of the next-generation Airborne Collision Avoid- ance Systems (ACAS Xu), which is currently being developed by the Federal Aviation Administration (FAA) [12].
This paper presents some of our ongoing efforts along this line of work, focusing on adversarial robustness properties. We study different kinds of robustness properties and practical considerations for proving them on real-world networks. We also present some initial results on proving these properties for the ACAS Xu networks. Finally, we discuss some of the open questions we have encountered and our plans for addressing them in the future.
The rest of this paper is organized as follows. We brieï¬y provide some needed background on DNNs and on Reluplex in Section 2, followed by a discussion of adversarial robustness in Section 3. We continue with a discussion of our ongoing research and present some initial experimental results in Section 4, and conclude with Section 5.
# 2 Background
# 2.1 Deep Neural Networks | 1709.02802#4 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 5 | ⢠SRU improves the training of deep recur- rent models by employing highway connec- tions (Srivastava et al., 2015) and a parame- ter initialization scheme tailored for gradient propagation in deep architectures.
We evaluate SRU on a broad set of problems, including text classiï¬cation, question answering, translation and character-level language model- ing. Our experiments demonstrate that light re- currence is sufï¬cient for various natural language tasks, offering a good trade-off between scala- bility and representational power. On classiï¬ca- tion and question answering datasets, SRU out- performs common recurrent and non-recurrent ar- chitectures, while achieving 5â9x speed-up com- pared to cuDNN LSTM. Stacking additional lay- ers further improves performance, while incurring relatively small costs owing to the cheap compu- tation of a single layer. We also obtain an average improvement of 0.7 BLEU score on the English to German translation task by incorporating SRU into Transformer (Vaswani et al., 2017).
# 2 Related Work | 1709.02755#5 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 5 | # 2 Background
# 2.1 Deep Neural Networks
Deep neural networks (DNNs) consist of a set of nodes (âneuronsâ), organized in a layered structure. Nodes in the ï¬rst layer are called input nodes, nodes in the last layer are called output nodes, and nodes in the intermediate layers are called hidden nodes. An example appears in Fig. 1 (borrowed from [13]).
Input #1 Output #1 Input #2 Output #2 Input #3 Output #3 Input #4 Output #4 Input #5 Output #5
Figure 1: A DNN with 5 input nodes (in green), 5 output nodes (in red), and 36 hidden nodes (in blue). The network has 6 layers.
Nodes are connected to nodes from the preceding layer by weighted edges, and are each assigned a bias value. An evaluation of the DNN is performed as follows. First, the input nodes are assigned values (these can correspond, e.g., to user inputs or sensor readings). Then, the network is evaluated
G.Katzetal. | 1709.02802#5 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 6 | Improving on common architectures for sequence processing has recently received signiï¬cant atten- tion (Greff et al., 2017; Balduzzi and Ghifary, 2016; Miao et al., 2016; Zoph and Le, 2016; Lee et al., 2017). One area of research involves incor- porating word-level convolutions (i.e. n-gram ï¬l- ters) into recurrent computation (Lei et al., 2015; Bradbury et al., 2017; Lei et al., 2017). For ex- ample, Quasi-RNN (Bradbury et al., 2017) pro- poses to alternate convolutions and a minimal- ist recurrent pooling function and achieves sig- niï¬cant speed-up over LSTM. While Bradbury et al. (2017) focus on the speed advantages of the network, Lei et al. (2017) study the theoretical characteristics of such computation and pos- sible extensions. Their results suggest that sim- pliï¬ed recurrence retains strong modeling capac- ity through layer stacking. This ï¬nding motivates the design of SRU for both high parallelization and representational power. SRU also relates to IRNN (Le et al., 2015), which uses an | 1709.02755#6 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 6 | G.Katzetal.
in each layer the values of the nodes are calculated by (i) computing a weighted sum layer-by-layer: of values from the previous layer, according to the weighted edges; (ii) adding each nodeâs bias value to the weighted sum; and (iii) applying a predetermined activation function to the result of (ii). The value returned by the activation function becomes the value of the node, and this process is propagated layer-by-layer until the networkâs output values are computed.
This work focuses on DNNs using a particular kind of activation function, called a rectiï¬ed linear unit (ReLU). The ReLU function is given by the piecewise linear formula ReLU(x) = max(0, x), i.e., positive values are unchanged and negative values are changed to 0. When applied to a positive value, we say that the ReLU is in the active state; and when applied to a non-positive value, we say that it is in the inactive state. ReLUs are very widely used in practice [14, 16], and it has been suggested that the piecewise linearity that they introduce allows DNNs to generalize well to new inputs [5, 6, 10, 17]. | 1709.02802#6 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02802 | 7 | A DNN N is referred to as a classiï¬er if it is associated with a set of labels L, such that each output node of N corresponds to a speciï¬c output label. For a given input ~x and label â â L, we refer to the value of ââs output node as the conï¬dence of N that ~x is labeled â, and denote this value by C(N,~x, â). An input ~x is said to be classiï¬ed to label â â L, denoted N(~x) = â, if C(N,~x, â) > C(N,~x, ââ²) for all ââ² 6= â.
# 2.2 Verifying Properties of Neural Networks | 1709.02802#7 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 8 | Various strategies have been proposed to scale network training (Goyal et al., 2017) and to speed up recurrent networks (Diamos et al., 2016; Shazeer et al., 2017; Kuchaiev and Ginsburg, 2017). For instance, Diamos et al. (2016) utilize hardware infrastructures by stashing RNN param- eters on cache (or fast memory). Shazeer et al. (2017) and Kuchaiev and Ginsburg (2017) im- prove the computation via conditional computing and matrix factorization respectively. Our imple- mentation for SRU is inspired by the cuDNN- optimized LSTM (Appleyard et al., 2016), but en- ables more parallelism â while cuDNN LSTM re- quires six optimization steps, SRU achieves more signiï¬cant speed-up via two optimizations. | 1709.02755#8 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 8 | # 2.2 Verifying Properties of Neural Networks
A DNN can be regarded as a collection of linear equations, with the additional ReLU constraints. Ex- isting veriï¬cation tools capable of handling these kinds of constraints include linear programming (LP) solvers and satisï¬ability modulo theories (SMT) solvers, and indeed past research has focused on using these tools [2, 9, 18, 19]. As for the properties being veriï¬ed, we restrict our attention to properties that can be expressed as linear constraints over the DNNâs input and output nodes. Many properties of interest seem to fall into this category, including adversarial robustness [13]. | 1709.02802#8 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 9 | The design of recurrent networks, such as SRU and related architectures, raises questions about representational power and interpretability (Chen et al., 2018; Peng et al., 2018). Balduzzi and Ghi- fary (2016) applies type-preserving transforma- tions to discuss the capacity of various simpliï¬ed RNN architectures. Recent work (Anselmi et al., 2015; Daniely et al., 2016; Zhang et al., 2016; Lei et al., 2017) relates the capacity of neural networks to deep kernels. We empirically demonstrate SRU can achieve compelling results by stacking multi- ple layers.
# 3 Simple Recurrent Unit
We present and explain the design of Simple Re- current Unit (SRU) in this section. A single layer of SRU involves the following computation:
f, = o (W Fx, +ve © C1 + be) f,O cq_1 + (1â£,) © (Wx) Ce
(1) = (2)
r, = 0 (W,x, + vr © G1 +b,) hy = OG +(1-m) OX
(3)
(4) | 1709.02755#9 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 9 | Unfortunately, this veriï¬cation problem is NP-complete [13], making it theoretically difï¬cult. It is also difï¬cult in practice, with modern solvers scaling only to very small examples [18, 19]. Because prob- lems involving only linear constraints are fairly easy to solve, many solvers handle the ReLU constraints by transforming the input query into a sequence of pure linear sub-problems, such that the original query is satisï¬able if and only if at least one of the sub-problems is satisï¬able. This transformation is performed by case-splitting: given a query involving n ReLU constraints, the linear sub-problems are obtained by ï¬xing each of the ReLU constraints in either the active or inactive state (recall that ReLU constraints are piecewise linear). Unfortunately, this entails exploring every possible combination of active/inactive ReLU states, meaning that the solver needs to check 2n linear sub-problems in the worst case. This quickly becomes a crucial bottleneck when n increases. | 1709.02802#9 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 10 | r, = 0 (W,x, + vr © G1 +b,) hy = OG +(1-m) OX
(3)
(4)
where W, Wf and Wr are parameter matrices and vf , vr, bf and bv are parameter vectors to be learnt during training. The complete architec- ture decomposes to two sub-components: a light recurrence (Equation 1 and 2) and a highway net- work (Equation 3 and 4).
The light recurrence component successively reads the input vectors xt and computes the se- quence of states ct capturing sequential informa- tion. The computation resembles other recurrent networks such as LSTM, GRU and RAN (Lee et al., 2017). Speciï¬cally, a forget gate ft controls the information ï¬ow (Equation 1) and the state vector ct is determined by adaptively averaging the previous state ctâ1 and the current observation Wxt according to ft (Equation 2). | 1709.02755#10 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 10 | In a recent paper, we proposed a new algorithm, called Reluplex, capable of verifying DNNs that are an order-of-magnitude larger than was previously possible [13]. The key insight that led to this improved scalability was a lazy treatment of the ReLU constraints: instead of exploring all possible combinations of ReLU activity or inactivity, Reluplex temporarily ignores the ReLU constraints and attempts to solve just the linear portion of the problem. Then, by deriving variable bounds from the linear equations that it explores, Reluplex is often able to deduce that some of the ReLU constraints are ï¬xed in either the active or inactive case, which greatly reduces the amount of case-splitting that it later needs to perform. This has allowed us to use Reluplex to verify various properties of the DNN-based implementation of the ACAS Xu system: a family of 45 DNNs, each with 300 ReLU nodes.
21
22
TowardsProving Adversarial Robustness ofDeepNeuralNetworks
# 3 Adversarial Robustness | 1709.02802#10 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 11 | One key design decision that differs from previ- ous gated recurrent architectures is the way cy; is used in the sigmoid gate. Typically, cy; is multiplied with a parameter matrix to compute f;, e.g., fp = o(Wyx; + Vecx_1 + by). However, the inclusion of Vfc;â1 makes it difficult to par- allelize the state computation: each dimension of c; and f; depends on all entries of c;_;, and the computation has to wait until c;_; is fully com- puted. To facilitate parallelization, our light recur- rence component uses a point-wise multiplication vy © C-1 instead. With this simplification, each dimension of the state vectors becomes indepen- dent and hence parallelizable.
The highway network component (Srivastava et al., 2015) facilitates gradient-based training of deep networks. It uses the reset gate r, (Equation 3) to adaptively combine the input x; and the state c; produced from the light recurrence (Equation 4), where (1 â r;) © x; is a skip connection that allows the gradient to directly propagate to the pre- vious layer. Such connections have been shown to improve scalability (Wu et al., 2016a; Kim et al., 2016; He et al., 2016; Zilly et al., 2017). | 1709.02755#11 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 11 | 21
22
TowardsProving Adversarial Robustness ofDeepNeuralNetworks
# 3 Adversarial Robustness
A key challenge in software veriï¬cation, and in particular in DNN veriï¬cation, is obtaining a speciï¬- cation against which the software can be veriï¬ed. One solution is to manually develop such properties on a per-system basis, but we can also focus on properties that are desirable for every network. Adver- sarial robustness properties fall into this category: they express the requirement that the network behave smoothly, i.e. that small input perturbations should not cause major spikes in the networkâs output. Be- cause DNNs are trained over a ï¬nite set of inputs/outputs, this captures our desire to ensure that the network behaves âwellâ on inputs that were neither tested nor trained on. If adversarial robustness is determined to be too low in certain parts of the input space, the DNN may be retrained to increase its robustness [7].
We begin with a common deï¬nition for local adversarial robustness [2, 9, 13]:
Deï¬nition 1 A DNN N is δ-locally-robust at point ~x0 iff
# We. | 1709.02802#11 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 12 | The combination of the two components makes the overall architecture simple yet expressive, and easy to scale due to enhanced parallelization and gradient propagation.
# 3.1 Parallelized Implementation
Despite the parallelization friendly design of SRU, a naive implementation which computes equations (1)â(4) for each step t sequentially would not achieve SRUâs full potential. We employ two op- timizations to enhance parallelism. The optimiza- tions are performed in the context of GPU / CUDA programming, but the general idea can be applied to other parallel programming models.
We re-organize the computation of equations (1)â(4) into two major steps. First, given the input sequence {x1 · · · xL}, we batch the matrix multi- plications across all time steps. This signiï¬cantly improves the computation intensity (e.g. GPU uti- lization). The batched multiplication is:
Ww Wy; WwW, Ul= [x1,X2,°°+ xz] ,
where L is the sequence length, U â RLÃ3d is the computed matrix and d is the hidden state size. When the input is a mini-batch of B sequences, U would be a tensor of size (L, B, 3d). | 1709.02755#12 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 12 | Deï¬nition 1 A DNN N is δ-locally-robust at point ~x0 iff
# We.
k~x â ~x0k ⤠δ â N(~x) = N(~x0)
Intuitively, Deï¬nition 1 states that for input ~x that is very close to ~x0, the network assigns to ~x the same label that it assigns to ~x0; âlocalâ thus refers to a local neighborhood around ~x0. Larger values of δ imply larger neighborhoods, and hence better robustness. Consider, for instance, a DNN for image recognition: δ-local-robustness can then capture the fact that slight perturbations of the input image, i.e. perturbations so small that a human observer would fail to detect them, should not result in a change of label. | 1709.02802#12 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 13 | The second step computes the remaining point- wise operations. Speciï¬cally, we compile all point-wise operations into a single fused CUDA kernel and parallelize the computation across each dimension of the hidden state. Algorithm 1 shows the pseudo code of the forward function. The com- plexity of this step is O(L · B · d) per layer, where L is the sequence length and B is the batch size. In contrast, the complexity of LSTM is O(L · B · d2) because of the hidden-to-hidden multiplications (e.g. Vhtâ1), and each dimension can not be in- dependently parallelized. The fused kernel also reduces overhead. Without it, operations such as sigmoid activation would each invoke a separate function call, adding kernel launching latency and more data moving costs.
The implementation of a bidirectional SRU is similar: the matrix multiplications of both direc- tions are batched, and the fused kernel handles and parallelizes both directions at the same time.
# 3.2 Initialization
Proper parameter initialization can reduce gradient propagation difï¬culties and hence have a positive | 1709.02755#13 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 13 | There appear to be two main drawbacks to using Deï¬nition 1: (i) The property is checked for in- dividual input points in an inï¬nite input space, and it does not necessarily carry over to other points that are not checked. This issue may be partially mitigated by testing points drawn from some random distribution thought to represent the input space. (ii) For each point ~x0 we need to specify the minimal acceptable value of δ. Clearly, these values can vary between different input points: for example, a point deep within a region that is expected to be labeled â1 should have high robustness, whereas for a point closer to the boundary between two labels â1 and â2 even a tiny δ may be acceptable. We note that given a point ~x0 and a solver such as Reluplex, one can perform a binary search and ï¬nd the largest δ for which N is δ-locally-robust at ~x0 (up to a desired precision).
In order to overcome the need to specify each individual δ separately, in [13] we proposed an alternative approach, using the notion of global robustness:
Deï¬nition 2 A DNN N is (δ, ε)-globally-robust in input region D iff | 1709.02802#13 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 14 | # 3.2 Initialization
Proper parameter initialization can reduce gradient propagation difï¬culties and hence have a positive
Equations (1)-(4). state dimension d. multiplication U[I, i, 7â]; b,[j]. Parallelize each example 7 and dimension j 5 layers * no scaling © with scaling 20 layers 0.75 oth 0.58 0.42 0.25 0 250 500 750 1000
Algorithm 1 Mini-batch version of the forward pass deï¬ned in Equations (1)â(4). | 1709.02755#14 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02755 | 15 | Algorithm 1 Mini-batch version of the forward pass deï¬ned in Equations (1)â(4).
Indices: Sequence length L, mini-batch size B, hidden state dimension d. Input: Input sequences batch x{I, i, j]; grouped matrix multiplication U[I, i, 7â]; initial state co[?, j]; parameters v [J], v,[J], by[j] and b,[j]. Output: Output h[-, -,-] and internal c[-, -, -] states. Initialize h[-,-,-] and c[-,-,-] as two L x B x d tensors. fori =1,---,B;j=1,--- c= Cofi, j] for] =1,--- ,Ldo ,d do f=o(U[l,i,j+d)+vy[j] x e+ by[j]) c=fxc+(1âf) x Ufl,i,j] // Parallelize each example 7 and dimension j r=o(U[l,i,j+dx 2] +v,[j] x c+b,[j]) h=rxct+(lâr) x x{l,i, 3]
impact on the ï¬nal performance. We now describe an initialization strategy tailored for SRU. | 1709.02755#15 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 15 | Vx, ED. || -âM|I|) <6 = VEEL. |C(N,%,0)-âC(N,,2)| <â¬
Deï¬nition 2 addresses the two shortcomings of Deï¬nition 1. First, it considers an input domain D instead of a speciï¬c point ~x0, allowing it to cover inï¬nitely many points (or even the entire input space) in a single query, with δ and ε deï¬ned once for the entire domain. Also, it is better suited for handling input points that lay on the boundary between two labels: this deï¬nition now only requires that two δ-adjacent points are classiï¬ed in a similar (instead of identical) way, in the sense that there are no spikes greater than ε in the levels of conï¬dence that the network assigns to each label for these points. Here it is desirable to have a large δ (for large neighborhoods) and a small ε (for small spikes), although it is expected that the two parameters will be mutually dependent.
Unfortunately, global robustness appears to be signiï¬cantly harder to check, as we discuss next.
G.Katzetal.
# 3.1 Verifying Robustness using Reluplex | 1709.02802#15 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 16 | impact on the ï¬nal performance. We now describe an initialization strategy tailored for SRU.
We start by adopting common initializations de- rived for feed-forward networks (Glorot and Ben- gio, 2010; He et al., 2015). The weights of param- eter matrices are drawn with zero mean and 1/d variance, for instance, via the uniform distribution [-\/3/d, +./3/d]. This ensures the output vari- ance remains approximately the same as the input variance after the matrix multiplication.
the light recurrence and highway computation would still reduce the variance of hidden representations by a factor of 1/3 to 1/2:
1 3 Var[ht] Var[xt] 1 2 ⤠⤠,
Figure 2: Training curves of SRU on classiï¬cation. The x-axis is the number of training steps and the y-axis is the training loss. Scaling correction im- proves the training progress, especially for deeper models with many stacked layers.
and the factor converges to 1/2 in deeper layers (see Appendix A). This implies the output ht and the gradient would vanish in deep models. To off- set the problem, we introduce a scaling correction constant α in the highway connection
# 4 Experiments
h, = r,-Oc¢ + (1-r)OxX-a ,
â | 1709.02755#16 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 16 | G.Katzetal.
# 3.1 Verifying Robustness using Reluplex
Provided that the distance metrics in use can be expressed as a combination of linear constraints and ReLU operators (L1 and Lâ fall into this category), δ-local-robustness and (δ, ε)-global-robustness prop- erties can be encoded as Reluplex inputs. For the local robustness case, the input constraint k~x â ~x0k ⤠δ is encoded directly as a set of linear equations and variable bounds, and the robustness property is negated and encoded as
_ â6=N(~x0) N(~x) = â
Thus, if Reluplex ï¬nds a variable assignment that satisï¬es the query, this assignment constitutes a counter-example ~x that violates the property, i.e., ~x is δ-close to ~x0 but has a label different from that of ~x0. If Reluplex discovers that the query is unsatisï¬able, then the network is guaranteed to be δ-local- robust at ~x0. | 1709.02802#16 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 17 | # 4 Experiments
h, = r,-Oc¢ + (1-r)OxX-a ,
â
We evaluate SRU on several natural language pro- cessing tasks and perform additional analyses of the model. The set of tasks includes text classiï¬ca- tion, question answering, machine translation, and character-level language modeling. Training time on these benchmarks ranges from minutes (classi- ï¬cation) to days (translation), providing a variety of computation challenges.
where α is set to 3 such that Var[ht] â Var[xt] at initialization. When the highway network is ini- tialized with a non-zero bias br = b, the scaling constant α can be accordingly set as:
# a = Vlt+exp(b) x2.
Figure 2 compares the training progress with and without the scaling correction. See Appendix A for the derivation and more discussion.
The main question we study is the performance- speed trade-off SRU provides in comparison to | 1709.02755#17 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 17 | Encoding (δ, ε)-global-robustness is more difï¬cult because neither ~x1 nor ~x2 is ï¬xed. It is performed by encoding two copies of the network, denoted N1 and N2, such that ~x1 is the input to N1 and ~x2 is the input to N2. We again encode the constraint k~x1 â ~x2k ⤠δ as a set of linear equations, and the robustness property is negated and encoded as
_ ââL |C(N1, ~x1, â) âC(N2, ~x2, â)| ⥠ε
As before, if the query is unsatisï¬able then the property holds, whereas a satisfying assignment consti- tutes a counter-example. | 1709.02802#17 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 18 | Model Size CR SUBJ MR TREC MPQA SST Best reported results: Wang and Manning (2013) Kalchbrenner et al. (2014) Kim (2014) Zhang and Wallace (2017) Zhao et al. (2015) 82.1 - 85.0 84.7 86.3 93.6 - 93.4 93.7 95.5 79.1 - 81.5 81.7 83.1 - 93.0 93.6 91.6 92.4 86.3 - 89.6 89.6 93.3 - 86.8 88.1 85.5 - Our setup (default Adam, ï¬xed word embeddings): 360k CNN 352k LSTM QRNN (k=1) 165k QRNN (k=1) + highway 204k 83.1±1.6 82.7±1.9 83.5±1.9 84.0±1.9 92.7±0.9 92.6±0.8 93.4±0.6 93.4±0.8 78.9±1.3 79.8±1.3 82.0±1.0 82.1±1.2 93.2±0.8 93.4±0.9 92.5±0.5 93.2±0.6 89.2±0.8 | 1709.02755#18 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 18 | As before, if the query is unsatisï¬able then the property holds, whereas a satisfying assignment consti- tutes a counter-example.
While both kinds of queries can be encoded in Reluplex, global robustness is signiï¬cantly harder to prove than its local counterpart. The main reason is the technique mentioned in Section 3.1, which allows Reluplex to achieve scalability by determining that certain ReLU constraints are ï¬xed at either the active or inactive state. When checking local robustness, the networkâs input nodes are restricted to a small neighborhood around ~x0, and this allows Reluplex to discover that many ReLU constraints are ï¬xed; whereas the larger domain D used for global robustness queries tends to allow fewer ReLUs to be eliminated, which entails additional case-splitting and slows Reluplex down. Also, as previously explained, encoding a global-robustness property entails encoding two identical copies of the DNN in question. This doubles the number of variables and ReLUs that Reluplex needs to handle, leading to slower performance. Consequently, our implementation of Reluplex can currently verify the local ad- versarial robustness of DNNs with several hundred nodes, whereas global robustness is limited to DNNs with a few dozen nodes.
# 4 Moving Forward | 1709.02802#18 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 19 | 93.2±0.8 93.4±0.9 92.5±0.5 93.2±0.6 89.2±0.8 89.4±0.7 90.2±0.7 89.6±1.2 85.1±0.6 88.1±0.8 88.2±0.4 88.9±0.2 SRU (2 layers) SRU (4 layers) SRU (8 layers) 204k 303k 502k 84.9±1.6 85.9±1.5 86.4±1.7 93.5±0.6 93.8±0.6 93.7±0.6 82.3±1.2 82.9±1.0 83.1±1.0 94.0±0.5 94.8±0.5 94.7±0.5 90.1±0.7 90.1±0.6 90.2±0.8 89.2±0.3 89.6±0.5 88.9±0.6 Time - - - - - 417 2409 345 371 320 510 879 | 1709.02755#19 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 19 | # 4 Moving Forward
A signiï¬cant question in moving forward is on which deï¬nition of adversarial robustness to focus. The advantages of using (δ, ε)-global-robustness are clear, but the present state-of-the-art seems insufï¬cient for verifying it; whereas δ-local-robust is more feasible but requires a high degree of manual ï¬ne tuning. We suggest to focus for now on the following hybrid deï¬nition, which is an enhanced version of local robustness: Deï¬nition 3 A DNN N is (δ, ε)-locally-robust at point ~x0 iff
â~x. k~x â ~x0k ⤠δ â ââ â L.
|C(N,~x, â) âC(N,~x0, â)| < ε
VX. ||XÂ¥-xXo|] <6 => VEEL. |C(N,X,2)âC(N,30,0| <eâ¬
23
24
TowardsProving Adversarial Robustness ofDeepNeuralNetworks | 1709.02802#19 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 20 | Table 1: Test accuracies on classiï¬cation benchmarks (Section 4.1). The ï¬rst block presents best reported results of various methods. The second block compares SRU and other baselines given the same setup. For the SST dataset, we report average results of 5 runs. For other datasets, we perform 3 independent trials of 10-fold cross validation (3Ã10 runs). The last column compares the wall clock time (in seconds) to ï¬nish 100 epochs on the SST dataset.
other architectures. We stack multiple layers of SRU to directly substitute other recurrent, convo- lutional or feed-forward modules. We minimize hyper-parameter tuning and architecture engineer- ing for a fair comparison. Such efforts have a non- trivial impact on the results, which are beyond the scope of our experiments. Unless noted otherwise, the hyperparameters are set identical to prior work.
# 4.1 Text Classiï¬cation | 1709.02755#20 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02755 | 21 | # 4.1 Text Classiï¬cation
Dataset We use six sentence classiï¬cation benchmarks: movie review sentiment (MR; Pang and Lee, 2005), sentence subjectivity (SUBJ; Pang and Lee, 2004), customer reviews polar- ity (CR; Hu and Liu, 2004), question type (TREC; Li and Roth, 2002), opinion polarity (MPQA; Wiebe et al., 2005), and the Stanford sentiment treebank (SST; Socher et al., 2013).2
Following Kim (2014), we use word2vec em- beddings trained on 100 billion Google News to- kens. For simplicity, all word vectors are normal- ized to unit vectors and are ï¬xed during training.
Setup We stack multiple SRU layers and use the last output state to predict the class label for a given sentence. We train for 100 epochs and use the validation (i.e., development) set to se- lect the best training epoch. We perform 10-fold
cross validation for datasets that do not have a standard train-evaluation split. The result on SST is averaged over ï¬ve independent trials. We use Adam (Kingma and Ba, 2014) with the default learning rate 0.001, a weight decay 0 and a hid- den dimension of 128. | 1709.02755#21 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 21 | _ ââL |C(N,~x, â) âC(N, ~x0, â)| ⥠ε
Deï¬nition 3 is still local in nature, which means that testing it using Reluplex does not require encoding two copies of the network. It also allows ReLU elimination, which affords some scalability (see Table 1 for some initial results). Finally, this deï¬nitionâs notion of robustness is based on the difference in conï¬dence levels, as opposed to a different labeling, making it more easily applicable to any input point, even if it is close to a boundary between two labels. Thus, we believe it is superior to Deï¬nition 1. An open problem is how to determine the ï¬nite set of points to be tested, and the δ and ε values to test. (Note that it may be possible to use the same δ and ε values for all points tested, reducing the amount of manual work required.) | 1709.02802#21 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 22 | We compare SRU with a wide range of meth- ods on these datasets, including various convo- lutional models (Kalchbrenner et al., 2014; Kim, 2014; Zhang and Wallace, 2017) and a hierarchical sentence model (Zhao et al., 2015) reported as the state of the art on these datasets (Conneau et al., 2017). Their setups are not exactly the same as ours, and may involve more tuning on word em- beddings and other regularizations. We use the setup of Kim (2014) but do not ï¬ne-tune word embeddings and the learning method for simplic- ity. In addition, we directly compare against three baselines trained using our code base: a re- implementation of the CNN model of Kim (2014), a two-layer LSTM model and Quasi-RNN (Brad- bury et al., 2017). We use the ofï¬cial implemen- tation of Quasi-RNN and also implement a ver- sion with highway connection for a fair compar- ison. These baselines are trained using the same hyper-parameter conï¬guration as SRU.
2We use the binary version of SST dataset. | 1709.02755#22 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 22 | Another important challenge in moving forward is scalability. Currently, Reluplex is able to handle DNNs with several hundred nodes, but many real-world DNNs are much larger than that. Apart from improving the Reluplex heuristics and implementation, we believe that parallelization will play a key role here. Veriï¬cation of robustness properties, both local and global, naturally lends itself to paral- lelization. In the local case, testing the robustness of n input points can be performed simultaneously using n machines; and even in the global case, an input domain D can be partitioned into n sub domains D1, . . . , Dn, each of which can be tested separately. The experiment described in Table 1 demonstrates the beneï¬ts of parallelizing (δ, ε)-local-robustness testing even further: apart from testing each point on a separate machine, for each point the disjuncts in the encoding of Deï¬nition 3 can also be checked in parallel. The improvement in performance is evident, emphasizing the potential beneï¬ts of pursuing this direction further.
We believe parallelization can be made even more efï¬cient in this context by means of two comple- mentary directions: | 1709.02802#22 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 23 | 2We use the binary version of SST dataset.
Results Table 1 compares the test results on the six benchmarks. We select the best number reCR 90 98 96 94 92 90 88 85 80 SUB} MR 0 20 40 60 80 100 120 0 50 100 150 200 250 300 0 50 100 150 200 250 300 TREC MPQA ssT 6 94 94 5 92 90 90 90 95 88 86 80 88 â cuDNNLSTM 84 86 â sRu 75 82 â CNN 4 80 0 20 40 60 80 100 120 0 25 50 75 100 125 150 0 500 1000 1500 2000
Figure 3: Mean validation accuracies (y-axis) and standard deviations of the CNN, 2-layer LSTM and 2-layer SRU models. We plot the curves of the ï¬rst 100 epochs. X-axis is the training time used (in seconds). Timings are performed on NVIDIA GeForce GTX 1070 GPU, Intel Core i7-7700K Processor and cuDNN 7003. | 1709.02755#23 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 23 | We believe parallelization can be made even more efï¬cient in this context by means of two comple- mentary directions:
1. Prioritization. When testing the (local or global) robustness of a DNN, we can stop immediately once a violation has been found. Thus, prioritizing the points or input domains and starting from those in which a violation is most likely to occur could serve to reduce execution time. Such prioritization could be made possible by numerically analyzing the network prior to veriï¬cation, identifying input regions in which there are steeper ï¬uctuations in the output values, and focusing on these regions ï¬rst.
2. Information sharing across nodes. As previously mentioned, a key aspect of the scalability of Reluplex is its ability to determine that certain ReLU constraints are ï¬xed in either the active or inactive case. When running multiple experiments, these conclusions could potentially be shared between executions, improving performance. Of course, great care will need to be taken, as a ReLU that is ï¬xed in one input domain may not be ï¬xed (or may even be ï¬xed in the other state) in another domain. | 1709.02802#23 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 24 | ported in previous methods when multiple model variants were explored in their experiments. De- spite our simple setup, SRU outperforms most pre- vious methods and achieves comparable results compared to the state-of-the-art but more sophisti- cated model of Zhao et al. (2015). Figure 3 shows validation performance relative to training time for SRU, cuDNN LSTM and the CNN model. Our SRU implementation runs 5â9 times faster than cuDNN LSTM, and 6â40% faster than the CNN model of Kim (2014). On the movie review (MR) dataset for instance, SRU completes 100 training epochs within 40 seconds, while LSTM takes over 320 seconds.
We use the open source implementation of Doc- ument Reader in our experiments.4 We train mod- els for up to 100 epochs, with a batch size of 32 and a hidden dimension of 128. Following the author suggestions, we use the Adamax op- timizer (Kingma and Ba, 2014) and variational dropout (Gal and Ghahramani, 2016) during train- ing. We compare with two alternative recurrent components: the bidirectional LSTM adopted in the original implementation of Chen et al. (2017) and Quasi-RNN with highway connections for im- proved performance.
# 4.2 Question Answering | 1709.02755#24 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 24 | Finally, we believe it would be important to come up with automatic techniques for choosing the input points (in the local case) or domains (in the global case) to be tested, and the corresponding δ and ε parameters. These techniques would likely take into account the distribution of the inputs in the networkâs training set. In the global case, domain selection could be performed in a way that would optimize the veriï¬cation process, by selecting domains in which ReLU constraints are ï¬xed in the active or inactive state.
G.Katzetal.
Table 1: Checking the (δ, ε)-local-robustness of one of the ACAS Xu DNNs [13] at 5 arbitrary input points, for different values of ε (we ï¬xed δ = 0.018 for all experiments). The Seq. columns indicate execution time (in seconds) for a sequential execution, and the Par. columns indicate execution time (in seconds) for a parallelized execution using 5 machines. | 1709.02802#24 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 25 | # 4.2 Question Answering
Dataset We use the Stanford Question Answer- ing Dataset (SQuAD; Rajpurkar et al., 2016). SQuAD is a large machine comprehension dataset that includes over 100K question-answer pairs ex- tracted from Wikipedia articles. We use the stan- dard train and development sets.
Setup We use the Document Reader model of Chen et al. (2017) as our base architecture for this task. The model is a combination of word- level bidirectional RNNs and attentions, providing a good testbed to compare our bidirectional SRU implementation with other RNN components.3
3The current state-of-the-art models (Seo et al., 2016; Wang et al., 2017) make use of additional components such | 1709.02755#25 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 25 | Point ε = 0.01 Par. Robust? Seq. ε = 0.02 Par. Robust? Seq. ε = 0.03 Par. Robust? Seq. 1 2 3 4 5 No Yes Yes No Yes 5 277 103 17 333 5 1272 460 17 1479 No Yes Yes Yes Yes 785 248 134 249 259 7548 989 480 774 1115 Yes Yes Yes Yes Yes 9145 191 93 132 230 38161 747 400 512 934
# 5 Conclusion
The planned inclusion of DNNs within autonomous vehicle controllers poses a signiï¬cant challenge for their certiï¬cation. In particular, it is becoming increasingly important to show that these DNNs are robust to adversarial inputs. This challenge can be addressed through veriï¬cation, but the scalability of state-of- the-art techniques is a limiting factor and dedicated techniques and methodologies need to be developed for this purpose.
In [13] we presented the Reluplex algorithm which is capable of proving DNN robustness in some cases. Still, additional work is required to improve scalability. We believe that by carefully phrasing the properties being proved, and by intelligently applying parallelization, a signiï¬cant improvement can be achieved. | 1709.02802#25 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 26 | 3The current state-of-the-art models (Seo et al., 2016; Wang et al., 2017) make use of additional components such
Results Table 2 summarizes the results on SQuAD. SRU achieves 71.4% exact match and 80.2% F1 score, outperforming the bidirectional LSTM model by 1.9% (EM) and 1.4% (F1) re- spectively. SRU also exhibits over 5x speed-up over LSTM and 53â63% reduction in total train- ing time. In comparison with QRNN, SRU ob- tains 0.8% improvement on exact match and 0.6% on F1 score, and runs 60% faster. This speed im- provement highlights the impact of the fused ker- nel (Algorithm 1). While the QRNN baseline in- volves a similar amount of computation, assem- bling all element-wise operations of both direcas character-level embeddings, which are not directly com- parable to the setup of Chen et al. (2017). However, these models can potentially beneï¬t from SRU since RNNs are in- corporated in the model architecture.
# 4https://github.com/hitvoice/DrQA | 1709.02755#26 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 26 | As a long-term goal, we speculate that this line of work could assist researchers in verifying the dynamics of autonomous vehicle systems that include a DNN-based controller. In particular, it may be possible to ï¬rst formally prove that a DNN-based controller satisï¬es certain properties, and then use these properties in analyzing the dynamics of the system as a whole. Speciï¬cally, we plan to explore the integration of Reluplex with reachability analysis techniques, for both the ofï¬ine [11] and online [1] variants.
Acknowledgements. We thank Neal Suchy from the FAA and Lindsey Kuper from Intel for their valu- able comments and support. This work was partially supported by grants from the FAA and Intel.
# References
[1] M. Althoff & J. Dolan (2014): Online Veriï¬cation of Automated Road Vehicles using Reachability Analysis. IEEETransactionsonRobotics 30, pp. 903â918, doi:10.1109/TRO.2014.2312453. | 1709.02802#26 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 27 | # 4https://github.com/hitvoice/DrQA
Model # layers Size Dev EM Dev F1 Time per epoch Total RNN LSTM (Chen et al., 2017) 3 4.1m 69.5 78.8 316s 431s QRNN (k=1) + highway 4 6 2.4m 3.2m 70.1 ± 0.1 70.6 ± 0.1 79.4 ± 0.1 79.6 ± 0.2 113s 161s 214s 262s SRU SRU SRU 3 4 6 2.0m 2.4m 3.2m 70.2 ± 0.3 70.7 ± 0.1 71.4 ± 0.1 79.3 ± 0.1 79.7 ± 0.1 80.2 ± 0.1 58s 72s 100s 159s 173s 201s
Table 2: Exact match (EM) and F1 scores of various models on SQuAD (Section 4.2). We also report the total processing time per epoch and the time spent in RNN computations. SRU outperforms other models, and is more than ï¬ve times faster than cuDNN LSTM.
tions in SRU achieves better GPU utilization.
# 4.3 Machine Translation | 1709.02755#27 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 27 | [2] O Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori & A. Criminisi (2016): Measuring Neural Net Robustness with Constraints. In: Proc.30thConf.onNeuralInformationProcessingSystems(NIPS).
[3] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao & K. Zieba (2016): End to End Learning for Self-Driving Cars. Technical Report. http://arxiv.org/abs/1604.07316.
[4] N. Carlini & D. Wagner (2017): Towards Evaluating the Robustness of Neural Networks. In: Proc. 38th SymposiumonSecurityandPrivacy(SP), doi:10.1109/SP.2017.49.
[5] X. Glorot, A. Bordes & Y. Bengio (2011): Deep Sparse Rectiï¬er Neural Networks. In: Proc.14thInt.Conf. onArtiï¬cialIntelligenceandStatistics(AISTATS), pp. 315â323.
25
26 | 1709.02802#27 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 28 | tions in SRU achieves better GPU utilization.
# 4.3 Machine Translation
Dataset We train translation models on the WMT EnglishâGerman dataset, a standard benchmark for translation systems (Peitz et al., 2014; Li et al., 2014; Jean et al., 2015). The dataset consists of 4.5 million sentence pairs. We obtain the pre-tokenized dataset from the Open- NMT project (Klein et al., 2017). The sentences were tokenized using the word-piece model (Wu et al., 2016b), which generates a shared vocabu- lary of about 32,000 tokens. Newstest-2014 and newstest-2017 are provided and used as the vali- dation and test sets.5
Setup We use the state-of-the-art Transformer model of Vaswani et al. (2017) as our base archi- tecture. In the base model, a single Transformer consists of a multi-head attention layer and a bot- tleneck feed-forward layer. We substitute the feed- forward network using our SRU implementation:
base: W · ReLU_layer(x) + b ours: W · SRU_layer(x) + b .
The intuition is that SRU can better capture se- quential information as a recurrent network, and potentially achieve better performance while re- quiring fewer layers. | 1709.02755#28 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 28 | 25
26
TowardsProving Adversarial Robustness ofDeepNeuralNetworks
[6] I. Goodfellow, Y. Bengio & A. Courville (2016): Deep Learning. MIT Press. [7] I. Goodfellow, J. Shlens & C. Szegedy (2014): Explaining and Harnessing Adversarial Examples. Technical
Report. http://arxiv.org/abs/1412.6572.
[8] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath & B. Kingsbury (2012): Deep Neural Networks for Acoustic Modeling in Speech Recogni- tion: The Shared Views of Four Research Groups. IEEE Signal Processing Magazine 29(6), pp. 82â97, doi:10.1109/MSP.2012.2205597.
[9] X. Huang, M. Kwiatkowska, S. Wang & M. Wu (2016): Safety Veriï¬cation of Deep Neural Networks. Tech- nical Report. http://arxiv.org/abs/1610.06940. | 1709.02802#28 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 29 | The intuition is that SRU can better capture se- quential information as a recurrent network, and potentially achieve better performance while re- quiring fewer layers.
We keep the model conï¬guration the same as Vaswani et al. (2017): the model dimension is dmodel = 512, the feed-forward and SRU layer has inner dimensionality dff = dsru = 2048, and posi- tional encoding (Gehring et al., 2017) is applied on
the input word embeddings. The base model with- out SRU has 6 layers, while we set the number of layers to 4 and 5 when SRU is added. Following the original setup, we use a dropout probability 0.1 for all components, except the SRU in the 5-layer model, for which we use a dropout of 0.2 as we observe stronger over-ï¬tting in training. | 1709.02755#29 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 29 | [10] K. Jarrett, K. Kavukcuoglu & Y. LeCun (2009): What is the Best Multi-Stage Architecture for Ob- In: Proc. 12th IEEE Int. Conf. on Computer Vision (ICCV), pp. 2146â2153, ject Recognition? doi:10.1109/ICCV.2009.5459469.
[11] J.-B. Jeannin, K Ghorbal, Y. Kouskoulas, R. Gardner, A. Schmidt, E. Zawadzki & A. Platzer (2015): A Formally Veriï¬ed Hybrid System for the Next-Generation Airborne Collision Avoidance System. In: Proc. 21stInt.Conf.onToolsandAlgorithmsfortheConstructionandAnalysisofSystems(TACAS), pp. 21â36, doi:10.1007/978-3-662-46681-0 2. | 1709.02802#29 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 30 | We use a single NVIDIA Tesla V100 GPU for each model. The published results were obtained using 8 GPUs in parallel, which provide a large ef- fective batch size during training. To approximate the setup, we update the model parameters ev- ery 5Ã5120 tokens and use 16,000 warm-up steps following OpenNMT suggestions. We train each model for 40 epochs (250,000 steps), and perform 3 independent trials for each model conï¬guration. A single run takes about 3.5 days with a Tesla V100 GPU. | 1709.02755#30 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 30 | [12] K. Julian, J. Lopez, J. Brush, M. Owen & M. Kochenderfer (2016): Policy Compression for Air- In: Proc. 35th Digital Avionics Systems Conf. (DASC), pp. 1â10, craft Collision Avoidance Systems. doi:10.1109/DASC.2016.7778091.
[13] G. Katz, C. Barrett, D. Dill, K. Julian & M. Kochenderfer (2017): Reluplex: An Efï¬cient SMT Solver for In: Proc. 29th Int. Conf. on Computer Aided Veriï¬cation (CAV), pp. Verifying Deep Neural Networks. 97â117, doi:10.1007/978-3-319-63387-9 5.
[14] A. Krizhevsky, I. Sutskever & G. Hinton (2012): Imagenet Classiï¬cation with Deep Convolutional Neural Networks. AdvancesinNeuralInformationProcessingSystems, pp. 1097â1105.
[15] A. Kurakin, I. Goodfellow & S. Bengio (2016): Adversarial Examples in the Physical World. Technical Report. http://arxiv.org/abs/1607.02533. | 1709.02802#30 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 31 | Results Table 3 shows the translation results. When SRU is incorporated into the architecture, both the 4-layer and 5-layer model outperform the Transformer base model. For instance, our 5- layer model obtains an average improvement of 0.7 test BLEU score and an improvement of 0.5 BLEU score by comparing the best results of each model achieved across three runs. SRU also ex- hibits more stable performance, with smaller vari- ance over 3 runs. Figure 4 further compares the validation accuracy of different models. These re- sults conï¬rm that SRU is better at sequence mod- eling compared to the original feed-forward net- work (FFN), requiring fewer layers to achieve sim- ilar accuracy. Finally, adding SRU does not affect the parallelization or speed of Transformer â the 4-layer model exhibits 10% speed improvement,
5https://github.com/OpenNMT/ OpenNMT-tf/tree/master/scripts/wmt | 1709.02755#31 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 31 | [16] A. Maas, A. Hannun & A. Ng (2013): Rectiï¬er Nonlinearities improve Neural Network Acoustic Models. In: Proc.30thInt.Conf.onMachineLearning(ICML).
[17] V. Nair & G. Hinton (2010): Rectiï¬ed Linear Units Improve Restricted Boltzmann Machines. In: Proc.27th Int.Conf.onMachineLearning(ICML), pp. 807â814.
[18] L. Pulina & A. Tacchella (2010): An Abstraction-Reï¬nement Approach to Veriï¬cation of Artiï¬cial In: Proc. 22nd Int. Conf. on Computer Aided Veriï¬cation (CAV), pp. 243â257, Neural Networks. doi:10.1007/978-3-642-14295-6 24.
[19] L. Pulina & A. Tacchella (2012): Challenging SMT Solvers to Verify Neural Networks. AICommunications 25(2), pp. 117â135, doi:10.3233/AIC-2012-0525. | 1709.02802#31 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 32 | 5https://github.com/OpenNMT/ OpenNMT-tf/tree/master/scripts/wmt
Model # layers Size Valid BLEU score Test Speed (toks/sec) Hours per epoch Transformer (base) Transformer (+SRU) Transformer (+SRU) 6 4 5 76m 79m 90m 26.6±0.2 (26.9) 26.7±0.1 (26.8) 27.1±0.0 (27.2) 27.6±0.2 (27.9) 27.8±0.1 (28.3) 28.3±0.1 (28.4) 20k 22k 19k 2.0 1.8 2.1
Table 3: EnglishâGerman translation results (Section 4.3). We perform 3 independent runs for each conï¬guration. We select the best epoch based on the valid BLEU score for each run, and report the average results and the standard deviation over 3 runs. In addition, we experiment with averaging model checkpoints and use the averaged version for evaluation, following (Vaswani et al., 2017). We show the best BLEU results achieved in brackets.
# Valid accuracy | 1709.02755#32 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02802 | 32 | [20] D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot & S. Dieleman (2016): Mastering the Game of Go with Deep Neural Net- works and Tree Search. Nature529(7587), pp. 484â489, doi:10.1038/nature16961.
[21] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow & R. Fergus (2013): Intriguing Properties of Neural Networks. Technical Report. http://arxiv.org/abs/1312.6199. | 1709.02802#32 | Towards Proving the Adversarial Robustness of Deep Neural Networks | Autonomous vehicles are highly complex systems, required to function reliably
in a wide variety of situations. Manually crafting software controllers for
these vehicles is difficult, but there has been some success in using deep
neural networks generated using machine-learning. However, deep neural networks
are opaque to human engineers, rendering their correctness very difficult to
prove manually; and existing automated techniques, which were not designed to
operate on neural networks, fail to scale to large systems. This paper focuses
on proving the adversarial robustness of deep neural networks, i.e. proving
that small perturbations to a correctly-classified input to the network cannot
cause it to be misclassified. We describe some of our recent and ongoing work
on verifying the adversarial robustness of networks, and discuss some of the
open questions we have encountered and how they might be addressed. | http://arxiv.org/pdf/1709.02802 | Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | cs.LG, cs.CR, cs.LO, stat.ML, D.2.4; I.2.2 | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 19-26 | cs.LG | 20170908 | 20170908 | [] |
1709.02755 | 33 | # Valid accuracy
72% 71% ; sqsnoneooenoee? 70% âO Base model O w/SRU (4 layer) © w/SRU (5 layer) 68% 67% 1 10 20 30 40
Figure 4: Mean validation accuracy (y-axis) of dif- ferent translation models after each training epoch (x-axis).
We compare various recurrent models and use a parameter budget similar to previous methods. In addition, we experiment with the factorization trick (Kuchaiev and Ginsburg, 2017) to reduce the total number of parameters without decreasing the performance. See details in Appendix B.
Results Table 4 presents the results of SRU and other recurrent models. The 8-layer SRU model achieves validation and test bits per char- acter (BPC) of 1.21, outperforming previous best reported results of LSTM, QRNN and recurrent highway networks (RHN). Increasing the layer of SRU to 12 and using a longer context of 256 char- acters in training further improves the BPC to 1.19
while the 5-layer model is only 5% slower com- pared to the base model. We present more results and discussion in Appendix B.3.
# 4.5 Ablation Analysis
# 4.4 Character-level Language Modeling
We perform ablation analyses on SRU by succes- sively disabling different components: | 1709.02755#33 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02755 | 34 | # 4.5 Ablation Analysis
# 4.4 Character-level Language Modeling
We perform ablation analyses on SRU by succes- sively disabling different components:
Dataset We use Enwik8, a large dataset for character-level Following standard practice, we use the ï¬rst 90M characters for training and the remaining 10M split evenly for validation and test.
(1) Remove the point-wise multiplication term v © c;_1 in the forget and reset gates. The resulting variant involves less recurrence and has less representational capacity.
Setup Similar to previous work, we use a batch size of 128 and an unroll size of 100 for trun- cated backpropagation during training. We also experiment with an unroll size of 256 and a batch size of 64 such that each training instance has longer context. We use a non-zero highway bias br = â3 that is shown useful for training lan- guage model (Zilly et al., 2017). Previous meth- ods employ different optimizers and learning rate schedulers for training. For simplicity and consis- tency, we use the Adam optimizer and the same learning rate scheduling (i.e., Noam scheduling) as the translation experiments. We train a maxi- mum of 100 epochs (about 700,000 steps).
(2) Disable the scaling correction by setting the constant α = 1. | 1709.02755#34 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02755 | 35 | (2) Disable the scaling correction by setting the constant α = 1.
(3) Remove the skip connections.
We train model variants on the classiï¬cation and question answering datasets. Table 5 and Figure 5 conï¬rm the impact of our design decisions â re- moving these components result in worse classiï¬- cation accuracies and exact match scores.
# 5 Discussion
This work presents Simple Recurrent Unit (SRU), a scalable recurrent architecture that operates as fast as feed-forward and convolutional units. We | 1709.02755#35 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02755 | 36 | Model Size # layers Unroll size Valid Test Time Best reported results: MI-LSTM (Wu et al., 2016c) HM-LSTM (Chung et al., 2016) LSTM (Melis et al., 2017) RHN (Zilly et al., 2017) FS-LSTM (Mujika et al., 2017) QRNN (Merity et al., 2018) LSTM (Merity et al., 2018) 17m 35m 46m 46m 47m 26m 47m 1 3 4 10 4 4 3 100 100 50 50 100 200 200 - - 1.28 - - - - 1.44 1.32 1.30 1.27 1.25 1.33 1.23 - - - - - - - Our setup: LSTM LSTM QRNN (k=1) SRU SRU SRU (with projection) SRU (with projection) SRU (with projection) 37m 37m 37m 37m 37m 37m 47m 49m 3 6 6 6 10 6 8 12 100 100 100 100 100 100 100 256 1.37 1.35 1.36 1.29 1.26 1.25 1.21 1.19 1.39 1.38 1.38 1.30 1.27 1.26 1.21 1.19 42min 48min 30min 28min 29min 29min 39min 41min | 1709.02755#36 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
1709.02755 | 37 | Table 4: Validation and test BPCs of different recurrent models on Enwik8 dataset. The last column presents the training time per epoch. For SRU with projection, we set the projection dimension to 512.
Model 4layers 6 layers SRU (full) 70.7 714 â remove v © C;_1 70.6 TLA â remove a-scaling 70.3 71.0 â remove highway 69.4 69.1
Table 5: Ablation analysis on SQuAD. Compo- nents are successively removed and the EM scores are averaged over 4 runs.
95.4 95.3 94.8 92.8 92.2 91.2 85.9 85.3 849 CR SUBJ MR Trec
Figure 5: Ablation analysis on the classification datasets. Average validation results are presented. We compare the full SRU implementation (left blue), the variant without v © c;_; multiplication (middle green) and the variant without highway connection (right yellow).
conï¬rm the effectiveness of SRU on multiple nat- ural language tasks ranging from classiï¬cation to translation. We open source our implementation to facilitate future NLP and deep learning research. | 1709.02755#37 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | [
{
"id": "1701.06538"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.