# Beyond Convexity: Stochastic Quasi-Convex Optimization

Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz

In this paper we analyze a stochastic version of the Normalized Gradient Descent (NGD) algorithm, which we denote by SNGD. Each iteration of SNGD is as simple and efficient as SGD, but is much more appropriate for non-convex optimization problems, overcoming some of the pitfalls that SGD may encounter. Particularly, we define a family of locally-quasi-convex and locally-Lipschitz functions, and prove that SNGD is suitable for optimizing such objectives.

Local-quasi-convexity is a generalization of unimodal functions to multidimensions, which includes quasi-convex and convex functions as a subclass. Locally-quasi-convex functions allow for certain types of plateaus and saddle points which are difficult for SGD and other gradient descent variants. Local-Lipschitzness is a generalization of Lipschitz functions that only assumes Lipschitzness in a small region around the minima, whereas farther away the gradients may be unbounded. Gradient explosion is thus another difficulty that is successfully tackled by SNGD and poses difficulties for other stochastic gradient descent variants.
- We introduce local-quasi-convexity, a property that extends quasi-convexity and captures unimodal functions which are not quasi-convex. We prove that NGD finds an $\epsilon$-optimal minimum for such functions within $O(1/\epsilon^2)$ iterations. As a special case, we show that the above rate can be attained for quasi-convex functions that are Lipschitz in an $\Omega(\epsilon)$-region around the optimum (gradients may be unbounded outside this region). For objectives that are also smooth in an $\Omega(\sqrt{\epsilon})$-region around the optimum, we prove a faster rate of $O(1/\epsilon)$.
- We introduce a new setup: stochastic optimization of locally-quasi-convex functions, and show that this setup captures Generalized Linear Models (GLM) regression, McCullagh and Nelder (1989). For this setup, we devise a stochastic version of NGD (SNGD), and show that it converges within $O(1/\epsilon^2)$ iterations to an $\epsilon$-optimal minimum. The above positive result requires that at each iteration of SNGD, the gradient should be estimated using a minibatch of a minimal size. We provide a negative result showing that if the minibatch size is too small then the algorithm might indeed diverge.
We report experimental results supporting our theoretical guarantees and demonstrate an accelerated convergence attained by SNGD.
# 1.1 Related Work
Quasi-convex optimization problems arise in numerous fields, spanning economics Varian (1985); Laffont and Martimort (2009), industrial organization Wolfstetter (1999), and computer vision Ke and Kanade (2007). It is well known that quasi-convex optimization tasks
can be solved by a series of convex feasibility problems Boyd and Vandenberghe (2004); however, generally solving such feasibility problems may be very costly Goffin et al. (1996). There exists a rich literature concerning quasi-convex optimization in the offline case, Polyak (1967); Zabotin et al. (1972); Khabibullin (1977); Sikorski (1986). A pioneering paper by Nesterov (1984) was the first to suggest an efficient algorithm, namely Normalized Gradient Descent, and prove that this algorithm attains an $\epsilon$-optimal solution within $O(1/\epsilon^2)$ iterations given a differentiable quasi-convex objective. This work was later extended by Kiwiel (2001), showing that the same result may be achieved assuming upper semi-continuous quasi-convex objectives. In Konnov (2003) it was shown how to attain faster rates for quasi-convex optimization, but under the assumption that the optimal value of the objective is known, an assumption that generally does not hold in practice.
Among the deep learning community there have been several attempts to tackle gradient explosion and plateaus. Ideas spanning gradient clipping Pascanu et al. (2013), smart initialization Doya (1993), and more, Martens and Sutskever (2011), have been shown to improve training in practice. Yet, none of these works provides a theoretical analysis showing better convergence guarantees.
To the best of our knowledge, there are no previous results on stochastic versions of NGD, nor results regarding locally-quasi-convex/locally-Lipschitz functions.
Gradient descent with fixed step sizes, including its stochastic variants, is known to perform poorly when the gradients are too small in a plateau area of the function, or alternatively when the other extreme happens: gradient explosions. These two phenomena have been reported in certain types of non-convex optimization, such as training of deep networks. Figure 1 depicts a one-dimensional family of functions for which GD behaves provably poorly. With a large step size, GD will hit the cliffs and then oscillate between the two boundaries. Alternatively, with a small step size, the low gradients will cause GD to miss the middle valley, which has a constant size of 1/2. On the other hand, this exact function is quasi-convex and locally-Lipschitz, and hence the NGD algorithm provably converges to the optimum quickly.
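The following minimal numerical sketch illustrates this behavior. It is not the exact construction of Figure 1; the gradient field `grad` below is our own illustrative one-dimensional landscape with cliff, plateau, and valley regions.

```python
import numpy as np

def grad(x):
    # Illustrative 1-D landscape: huge gradients ("cliffs") far from the minimum,
    # tiny gradients (a "plateau") in between, and a benign valley around x = 0.
    if abs(x) > 2.0:
        return 100.0 * np.sign(x)   # cliff region: exploding gradients
    if abs(x) > 0.5:
        return 1e-3 * np.sign(x)    # plateau region: vanishing gradients
    return x                        # valley around the minimum

def descend(normalize, x0=5.0, eta=0.1, steps=200):
    x = x0
    for _ in range(steps):
        g = grad(x)
        step = g / abs(g) if (normalize and g != 0.0) else g
        x -= eta * step
    return x

print("GD  final x:", descend(normalize=False))  # keeps bouncing between +-5
print("NGD final x:", descend(normalize=True))   # ends within about eta of 0
```

With a fixed step size, plain GD is thrown back and forth by the cliff gradients, while the normalized update crosses both the cliff and the plateau at a constant pace.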
# 2 Definitions and Notations
We use $\|\cdot\|$ to denote the Euclidean norm. $\mathbb{B}_d(\mathbf{x}, r)$ denotes the $d$-dimensional Euclidean ball of radius $r$ centered around $\mathbf{x}$, and $\mathbb{B}_d := \mathbb{B}_d(\mathbf{0}, 1)$. $[N]$ denotes the set $\{1, \ldots, N\}$.
For simplicity, throughout the paper we always assume that functions are differentiable (but, if not stated explicitly, we do not assume any bound on the norm of the gradients).
Definition 2.1. (Local-Lipschitzness and Local-Smoothness) Let $\mathbf{z} \in \mathbb{R}^d$, $G, \epsilon > 0$. A function $f : \mathbb{R}^d \mapsto \mathbb{R}$ is called $(G, \epsilon, \mathbf{z})$-Locally-Lipschitz if for every $\mathbf{x}, \mathbf{y} \in \mathbb{B}_d(\mathbf{z}, \epsilon)$ we have
$$|f(\mathbf{x}) - f(\mathbf{y})| \le G\, \|\mathbf{x} - \mathbf{y}\| .$$
Similarly, the function is $(\beta, \epsilon, \mathbf{z})$-locally-smooth if for every $\mathbf{x}, \mathbf{y} \in \mathbb{B}_d(\mathbf{z}, \epsilon)$ we have
$$|f(\mathbf{y}) - f(\mathbf{x}) - \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle| \le \tfrac{\beta}{2}\, \|\mathbf{x} - \mathbf{y}\|^2 .$$
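As a concrete (sampled, not rigorous) illustration of Definition 2.1, the sketch below estimates the local Lipschitz constant of a function on the ball $\mathbb{B}_d(\mathbf{z}, \epsilon)$; the test function $\|\mathbf{x}\|^3$ and the sampling scheme are our own choices, meant only to show that a function with globally unbounded gradients can still be locally Lipschitz with a small constant near its minimum.

```python
import numpy as np

def local_lipschitz_estimate(f, z, eps, trials=10000, seed=0):
    """Sampled lower bound on the smallest G such that f is (G, eps, z)-locally-Lipschitz."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    d, worst = z.size, 0.0
    for _ in range(trials):
        # draw two points uniformly from the Euclidean ball B(z, eps)
        u, v = rng.normal(size=(2, d))
        x = z + eps * (u / np.linalg.norm(u)) * rng.uniform() ** (1.0 / d)
        y = z + eps * (v / np.linalg.norm(v)) * rng.uniform() ** (1.0 / d)
        if not np.allclose(x, y):
            worst = max(worst, abs(f(x) - f(y)) / np.linalg.norm(x - y))
    return worst

# f(x) = ||x||^3 has unbounded gradients far from the origin, yet on B(0, 0.1)
# the estimate stays below the true local constant 3 * 0.1^2 = 0.03.
print(local_lipschitz_estimate(lambda x: np.linalg.norm(x) ** 3, z=np.zeros(3), eps=0.1))
```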
Figure 1: A quasi-convex locally-Lipschitz function with plateaus and cliffs.
Next we define quasi-convex functions:
Definition 2.2. (Quasi-Convexity) We say that a function $f : \mathbb{R}^d \mapsto \mathbb{R}$ is quasi-convex if $\forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^d$ such that $f(\mathbf{y}) \le f(\mathbf{x})$, it follows that
$$\langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle \le 0 .$$
We further say that $f$ is strictly-quasi-convex if it is quasi-convex and its gradients vanish only at the global minima, i.e., $\forall \mathbf{y} : f(\mathbf{y}) > \min_{\mathbf{x} \in \mathbb{R}^d} f(\mathbf{x}) \;\Rightarrow\; \|\nabla f(\mathbf{y})\| > 0$.
Informally, the above characterization states that the (opposite) gradient of a quasi-convex function directs us in a global descent direction. The following is an equivalent (more common) definition:
Definition 2.3. (Quasi-Convexity) We say that a function $f : \mathbb{R}^d \mapsto \mathbb{R}$ is quasi-convex if any $\alpha$-sublevel-set of $f$ is convex, i.e., $\forall \alpha \in \mathbb{R}$ the set
$$\mathcal{L}_\alpha(f) = \{\mathbf{x} : f(\mathbf{x}) \le \alpha\}$$
is convex.
The equivalence between the above definitions can be found in Boyd and Vandenberghe (2004); for completeness we provide a proof in Appendix A. Throughout this paper we denote the sublevel-set of $f$ at $\mathbf{x}$ by
$$\mathcal{S}_f(\mathbf{x}) = \{\mathbf{y} : f(\mathbf{y}) \le f(\mathbf{x})\} . \qquad (1)$$

# 3 Local-Quasi-Convexity

Quasi-convexity does not fully capture the notion of unimodality in several dimensions. As an example, let $\mathbf{x} = (x_1, x_2) \in [-10, 10]^2$, and consider
$$g(\mathbf{x}) = (1 + e^{-x_1})^{-1} + (1 + e^{-x_2})^{-1} . \qquad (2)$$
It is natural to consider $g$ as unimodal since it acquires no local minima but for the unique global minimum at $\mathbf{x}^* = (-10, -10)$. However, $g$ is not quasi-convex: consider the points $\mathbf{x} = (\log 16, -\log 4)$ and $\mathbf{y} = (-\log 4, \log 16)$, which belong to the $1.2$-sub-level set; their average does not belong to the same sub-level set, since $g(\mathbf{x}/2 + \mathbf{y}/2) = 4/3$.
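A quick numerical check of this counterexample (with the two points as reconstructed above):

```python
import numpy as np

def g(x):
    # the function of Equation (2)
    return np.sum(1.0 / (1.0 + np.exp(-x)))

x = np.array([np.log(16.0), -np.log(4.0)])
y = np.array([-np.log(4.0), np.log(16.0)])
print(g(x), g(y))        # about 1.141 each: both inside the 1.2-sub-level set
print(g((x + y) / 2.0))  # exactly 4/3, about 1.333: the midpoint is outside it
```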
Quasi-convex functions always enable us to explore, meaning that the gradient always directs us in a global descent direction. Intuitively, from an optimization point of view, we only need such a direction whenever we do not exploit, i.e., whenever we are not approximately optimal.
In what follows we define local-quasi-convexity, a property that enables us to either explore or exploit. This property¹ captures a wider class of unimodal functions (such as $g$ above) rather than mere quasi-convexity. Later we justify this definition by showing that it captures Generalized Linear Models (GLM) regression; see McCullagh and Nelder (1989); Kalai and Sastry (2009).
Definition 3.1. (Local-Quasi-Convexity) Let $\mathbf{x}, \mathbf{z} \in \mathbb{R}^d$, $\kappa, \epsilon > 0$. We say that $f : \mathbb{R}^d \mapsto \mathbb{R}$ is $(\epsilon, \kappa, \mathbf{z})$-Strictly-Locally-Quasi-Convex (SLQC) in $\mathbf{x}$, if at least one of the following applies:
1. $f(\mathbf{x}) - f(\mathbf{z}) \le \epsilon$.
2. $\|\nabla f(\mathbf{x})\| > 0$, and for every $\mathbf{y} \in \mathbb{B}(\mathbf{z}, \epsilon/\kappa)$ it holds that $\langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle \le 0$.
Note that if $f$ is a $G$-Lipschitz and strictly-quasi-convex function, then $\forall \mathbf{x}, \mathbf{z} \in \mathbb{R}^d$, $\forall \epsilon > 0$, it holds that $f$ is $(\epsilon, G, \mathbf{z})$-SLQC in $\mathbf{x}$. Recalling the function $g$ that appears in Equation (2), it can be shown that $\forall \epsilon \in (0, 1]$, $\forall \mathbf{x} \in [-10, 10]^2$, this function is $(\epsilon, 1, \mathbf{x}^*)$-SLQC in $\mathbf{x}$, where $\mathbf{x}^* = (-10, -10)$ (see Appendix B).
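The claim about $g$ can be checked numerically. Since $\max_{\mathbf{y} \in \mathbb{B}(\mathbf{z}, \epsilon/\kappa)} \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle = \langle \nabla f(\mathbf{x}), \mathbf{z} - \mathbf{x} \rangle + (\epsilon/\kappa)\|\nabla f(\mathbf{x})\|$, condition 2 of Definition 3.1 can be tested exactly. The random test points below are our own choice of sanity check, not a proof.

```python
import numpy as np

def g(x):
    return np.sum(1.0 / (1.0 + np.exp(-x)))

def grad_g(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def is_slqc_at(x, z, eps, kappa):
    if g(x) - g(z) <= eps:                  # condition 1: x is already eps-optimal
        return True
    grad = grad_g(x)
    norm = np.linalg.norm(grad)
    # condition 2: <grad, y - x> <= 0 for every y in the ball B(z, eps/kappa)
    return norm > 0.0 and grad @ (z - x) + (eps / kappa) * norm <= 0.0

rng = np.random.default_rng(0)
z = np.array([-10.0, -10.0])                # the minimizer x* of g on [-10, 10]^2
points = rng.uniform(-10.0, 10.0, size=(1000, 2))
print(all(is_slqc_at(x, z, eps=0.5, kappa=1.0) for x in points))   # True
```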
# 3.1 Generalized Linear Models (GLM)

# 3.1.1 The Idealized GLM

In this setup we have a collection of $m$ samples $\{(\mathbf{x}_i, y_i)\}_{i=1}^m \in \mathbb{B}_d \times [0, 1]$, and an activation function $\phi : \mathbb{R} \mapsto \mathbb{R}$. We are guaranteed to have $\mathbf{w}^* \in \mathbb{R}^d$ such that $y_i = \phi(\mathbf{w}^*, \mathbf{x}_i)$, $\forall i \in [m]$ (we denote $\phi(\mathbf{w}, \mathbf{x}) := \phi(\langle \mathbf{w}, \mathbf{x} \rangle)$). The performance of a predictor $\mathbf{w} \in \mathbb{R}^d$ is measured by the average square error over all samples,
$$\widehat{\mathrm{err}}_m(\mathbf{w}) = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - \phi(\mathbf{w}, \mathbf{x}_i) \right)^2 . \qquad (3)$$
In Kalai and Sastry (2009) it is shown that the Perceptron problem with $\gamma$-margin is a private case of GLM regression.
¹Definition 3.1 can be generalized in a manner that captures a broader range of scenarios (e.g. the Perceptron problem); we defer this definition to Appendix H.
The sigmoid function $\phi(z) = (1 + e^{-z})^{-1}$ is a popular activation function in the field of deep learning. The next lemma states that in the idealized GLM problem with sigmoid
activation, the error function is SLQC (but not quasi-convex). As we will see in Section 4, this implies that Algorithm 1 finds an $\epsilon$-optimal minimum of $\widehat{\mathrm{err}}_m(\mathbf{w})$ within $\mathrm{poly}(1/\epsilon)$ iterations.
Lemma 3.1. Consider the idealized GLM problem with the sigmoid activation, and assume that $\|\mathbf{w}^*\| \le W$. Then the error function appearing in Equation (3) is $(\epsilon, e^W, \mathbf{w}^*)$-SLQC in $\mathbf{w}$, $\forall \epsilon > 0$, $\forall \mathbf{w} \in \mathbb{B}_d(0, W)$ (but it is not generally quasi-convex).
We defer the proof to Appendix C.
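To make the idealized setup concrete, here is a minimal synthetic instance (our own illustrative data, not the paper's experiments): labels are generated exactly by the sigmoid model, so the empirical error of Equation (3) vanishes at $\mathbf{w}^*$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 1000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)               # ||w*|| <= W with W = 1

X = rng.normal(size=(m, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # samples x_i in the unit ball
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
y = sigmoid(X @ w_star)                        # idealized: y_i = phi(<w*, x_i>)

def err(w):
    # the empirical squared error of Equation (3)
    return np.mean((y - sigmoid(X @ w)) ** 2)

print(err(w_star))        # 0.0: the planted predictor attains the global minimum
print(err(np.zeros(d)))   # strictly positive
```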
# 3.1.2 The Noisy GLM
In the noisy GLM setup (see McCullagh and Nelder (1989); Kalai and Sastry (2009)), we may draw i.i.d. samples $\{(\mathbf{x}_i, y_i)\}_{i=1}^m \in \mathbb{B}_d \times [0, 1]$ from an unknown distribution $\mathcal{D}$. We assume that there exists a predictor $\mathbf{w}^* \in \mathbb{R}^d$ such that $\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}[y \,|\, \mathbf{x}] = \phi(\mathbf{w}^*, \mathbf{x})$, where $\phi$ is an activation function. Given $\mathbf{w} \in \mathbb{R}^d$ we define its expected error as follows:
$$\mathcal{E}(\mathbf{w}) = \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}} \left( y - \phi(\mathbf{w}, \mathbf{x}) \right)^2 ,$$
and it can be shown that $\mathbf{w}^*$ is a global minimum of $\mathcal{E}$. We are interested in schemes that obtain an $\epsilon$-optimal minimum to $\mathcal{E}$ within $\mathrm{poly}(1/\epsilon)$ samples and optimization steps. Given $m$ samples from $\mathcal{D}$, their empirical error $\widehat{\mathrm{err}}_m(\mathbf{w})$ is defined as in Equation (3).
The following lemma states that in this setup, letting $m = \Omega(1/\epsilon^2)$, the empirical error $\widehat{\mathrm{err}}_m$ is SLQC with high probability. This property will enable us to apply Algorithm 2 to obtain an $\epsilon$-optimal minimum to $\mathcal{E}$ within $\mathrm{poly}(1/\epsilon)$ samples from $\mathcal{D}$, and $\mathrm{poly}(1/\epsilon)$ optimization steps.
Lemma 3.2. Let $\delta, \epsilon \in (0, 1)$. Consider the noisy GLM problem with the sigmoid activation, and assume that $\|\mathbf{w}^*\| \le W$. Given a fixed point $\mathbf{w} \in \mathbb{B}(0, W)$, then w.p. $\ge 1 - \delta$, after $m \ge \Omega\!\left(\log(1/\delta)/\epsilon^2\right)$ samples, the empirical error function appearing in Equation (3) is $(\epsilon, e^W, \mathbf{w}^*)$-SLQC in $\mathbf{w}$.
Note that had we required the SLQC property to hold $\forall \mathbf{w} \in \mathbb{B}(0, W)$, then we would need the number of samples to depend on the dimension $d$, which we would like to avoid. Instead, we require SLQC to hold for a fixed $\mathbf{w}$. This satisfies the conditions of Algorithm 2, enabling us to find an $\epsilon$-optimal solution with a sample complexity that is independent of the dimension. We defer the proof of Lemma 3.2 to Appendix D.
# 4 NGD for Locally-Quasi-Convex Optimization
Here we present the NGD algorithm and prove its convergence rate for SLQC objectives. Our analysis is simple, enabling us to extend the convergence rate presented in Nesterov (1984) beyond quasi-convex functions. We then show that quasi-convex and locally-Lipschitz objectives are SLQC, implying that NGD converges even if the gradients are unbounded outside a small region around the minima. For quasi-convex and locally-smooth objectives, we show that NGD attains a faster convergence rate.
Algorithm 1 Normalized Gradient Descent (NGD)
Input: #Iterations $T$, $\mathbf{x}_1 \in \mathbb{R}^d$, learning rate $\eta$
for $t = 1 \ldots T$ do
  Update: $\mathbf{x}_{t+1} = \mathbf{x}_t - \eta\, \hat{\mathbf{g}}_t$, where $\mathbf{g}_t = \nabla f(\mathbf{x}_t)$, $\hat{\mathbf{g}}_t = \mathbf{g}_t / \|\mathbf{g}_t\|$
end for
Return: $\bar{\mathbf{x}}_T = \arg\min_{\{\mathbf{x}_1, \ldots, \mathbf{x}_T\}} f(\mathbf{x}_t)$
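Below is a direct Python sketch of Algorithm 1, assuming access to an exact gradient oracle `grad_f`; the usage example at the end is our own illustration on a simple quasi-convex objective.

```python
import numpy as np

def ngd(f, grad_f, x1, eta, T):
    """Normalized Gradient Descent (a sketch of Algorithm 1)."""
    x = np.asarray(x1, dtype=float)
    best = x.copy()
    for _ in range(T):
        g = grad_f(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:            # vanishing gradient: by SLQC, x is eps-optimal
            return x
        x = x - eta * g / norm     # the update uses only the gradient direction
        if f(x) < f(best):
            best = x.copy()
    return best                    # return the best iterate observed

# usage: f(x) = ||x - a|| is quasi-convex; NGD reaches an O(eta)-ball around a
a = np.array([3.0, -2.0])
f = lambda x: np.linalg.norm(x - a)
grad_f = lambda x: (x - a) / (np.linalg.norm(x - a) + 1e-12)
print(ngd(f, grad_f, x1=np.zeros(2), eta=0.05, T=200))   # close to [3, -2]
```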
NGD is presented in Algorithm 1. NGD is similar to GD, except that we normalize the gradients. It is intuitively clear that to obtain robustness to plateaus (where the gradient can be arbitrarily small) and to exploding gradients (where the gradient can be arbitrarily large), one must ignore the size of the gradient. It is more surprising that the information in the direction of the gradient suffices to guarantee convergence.
Following is the main theorem of this section:
Theorem 4.1. Fix $\epsilon > 0$, let $f : \mathbb{R}^d \mapsto \mathbb{R}$, and $\mathbf{x}^* \in \arg\min_{\mathbf{x} \in \mathbb{R}^d} f(\mathbf{x})$. Given that $f$ is $(\epsilon, \kappa, \mathbf{x}^*)$-SLQC in every $\mathbf{x} \in \mathbb{R}^d$, then running the NGD algorithm with $T \ge \kappa^2 \|\mathbf{x}_1 - \mathbf{x}^*\|^2 / \epsilon^2$ and $\eta = \epsilon/\kappa$, we have that $f(\bar{\mathbf{x}}_T) - f(\mathbf{x}^*) \le \epsilon$.
Theorem 4.1 states that $(\epsilon, \cdot, \mathbf{x}^*)$-SLQC functions admit a $\mathrm{poly}(1/\epsilon)$ convergence rate using NGD. The intuition behind this lies in Definition 3.1, which asserts that at a point $\mathbf{x}$ either the (opposite) gradient points out a global optimization direction, or we are already $\epsilon$-optimal. Note that the requirement of $(\epsilon, \cdot, \cdot)$-SLQC in every $\mathbf{x}$ is not restrictive; as we have seen in Section 3, there are interesting examples of functions that admit this property $\forall \epsilon \in [0, 1]$ and for any $\mathbf{x}$.
For simplicity we have presented NGD for unconstrained problems. Using projections we can easily extend the algorithm and its analysis to constrained optimization over convex sets. This enables a convergence rate of $O(1/\epsilon^2)$ for the objective presented in Equation (2), and for the idealized GLM problem presented in Section 3.1.1.
We are now ready to prove Theorem 4.1:
Proof of Theorem 4.1. First note that if the gradient of $f$ vanishes at $\mathbf{x}_t$, then by the SLQC assumption we must have that $f(\mathbf{x}_t) - f(\mathbf{x}^*) \le \epsilon$. Assume next that we perform $T$ iterations and the gradient of $f$ at $\mathbf{x}_t$ never vanishes in these iterations. Consider the update rule of NGD (Algorithm 1); then by standard algebra we get,
$$\|\mathbf{x}_{t+1} - \mathbf{x}^*\|^2 = \|\mathbf{x}_t - \mathbf{x}^*\|^2 - 2\eta \langle \hat{\mathbf{g}}_t, \mathbf{x}_t - \mathbf{x}^* \rangle + \eta^2 .$$
Assume that $\forall t \in [T]$ we have $f(\mathbf{x}_t) - f(\mathbf{x}^*) > \epsilon$. Take $\mathbf{y} = \mathbf{x}^* + (\epsilon/\kappa)\, \hat{\mathbf{g}}_t$, and observe that $\|\mathbf{y} - \mathbf{x}^*\| \le \epsilon/\kappa$. The $(\epsilon, \kappa, \mathbf{x}^*)$-SLQC assumption implies that $\langle \mathbf{g}_t, \mathbf{y} - \mathbf{x}_t \rangle \le 0$, and therefore
$$\langle \hat{\mathbf{g}}_t, \mathbf{x}^* + (\epsilon/\kappa)\, \hat{\mathbf{g}}_t - \mathbf{x}_t \rangle \le 0 \quad \Longrightarrow \quad \langle \hat{\mathbf{g}}_t, \mathbf{x}_t - \mathbf{x}^* \rangle \ge \epsilon/\kappa .$$
Setting $\eta = \epsilon/\kappa$, the above implies
$$\|\mathbf{x}_{t+1} - \mathbf{x}^*\|^2 \le \|\mathbf{x}_t - \mathbf{x}^*\|^2 - 2\eta\epsilon/\kappa + \eta^2 = \|\mathbf{x}_t - \mathbf{x}^*\|^2 - \epsilon^2/\kappa^2 .$$
Thus, after $T$ iterations for which $f(\mathbf{x}_t) - f(\mathbf{x}^*) > \epsilon$, we get
$$0 \le \|\mathbf{x}_{T+1} - \mathbf{x}^*\|^2 \le \|\mathbf{x}_1 - \mathbf{x}^*\|^2 - T \epsilon^2/\kappa^2 .$$
Therefore, we must have $T \le \kappa^2 \|\mathbf{x}_1 - \mathbf{x}^*\|^2 / \epsilon^2$.

# 4.1 Locally-Lipschitz/Smooth Quasi-Convex Optimization

It can be shown that strict-quasi-convexity and $(G, \epsilon/G, \mathbf{x}^*)$-local-Lipschitzness of $f$ imply that $f$ is $(\epsilon, G, \mathbf{x}^*)$-SLQC in every $\mathbf{x} \in \mathbb{R}^d$, $\forall \epsilon > 0$, where $\mathbf{x}^* \in \arg\min_{\mathbf{x} \in \mathbb{R}^d} f(\mathbf{x})$ (see Appendix E). Therefore the following is a direct corollary of Theorem 4.1:
Corollary 4.1. Fix $\epsilon > 0$, let $f : \mathbb{R}^d \mapsto \mathbb{R}$, and $\mathbf{x}^* \in \arg\min_{\mathbf{x} \in \mathbb{R}^d} f(\mathbf{x})$. Given that $f$ is strictly quasi-convex and $(G, \epsilon/G, \mathbf{x}^*)$-locally-Lipschitz, then running the NGD algorithm with $T \ge G^2 \|\mathbf{x}_1 - \mathbf{x}^*\|^2 / \epsilon^2$ and $\eta = \epsilon/G$, we have that $f(\bar{\mathbf{x}}_T) - f(\mathbf{x}^*) \le \epsilon$.
In case $f$ is also locally-smooth, we state an even faster rate:
Theorem 4.2. Fix $\epsilon > 0$, let $f : \mathbb{R}^d \mapsto \mathbb{R}$, and $\mathbf{x}^* \in \arg\min_{\mathbf{x} \in \mathbb{R}^d} f(\mathbf{x})$. Given that $f$ is strictly quasi-convex and $(\beta, \sqrt{2\epsilon/\beta}, \mathbf{x}^*)$-locally-smooth, then running the NGD algorithm with $T \ge \beta \|\mathbf{x}_1 - \mathbf{x}^*\|^2 / 2\epsilon$ and $\eta = \sqrt{2\epsilon/\beta}$, we have that $f(\bar{\mathbf{x}}_T) - f(\mathbf{x}^*) \le \epsilon$.
We prove Theorem 4.2 in Appendix F.
Remark 1. The above corollary (resp. theorem) implies that $f$ could have arbitrarily large gradients and second derivatives outside $\mathbb{B}(\mathbf{x}^*, \epsilon/G)$ (resp. $\mathbb{B}(\mathbf{x}^*, \sqrt{2\epsilon/\beta})$), yet NGD is still ensured to output an $\epsilon$-optimal point within $G^2\|\mathbf{x}_1 - \mathbf{x}^*\|^2/\epsilon^2$ (resp. $\beta\|\mathbf{x}_1 - \mathbf{x}^*\|^2/2\epsilon$) iterations. We are not familiar with a similar guarantee for GD even in the convex case.

# 5 SNGD for Stochastic SLQC Optimization
Here we describe the setting of stochastic SLQC optimization. Then we describe our SNGD algorithm, which is ensured to yield an $\epsilon$-optimal solution within $\mathrm{poly}(1/\epsilon)$ queries. We also show that the (noisy) GLM problem, described in Section 3.1.2, is an instance of stochastic SLQC optimization, allowing us to provably solve this problem within $\mathrm{poly}(1/\epsilon)$ samples and optimization steps using SNGD.
Algorithm 2 Stochastic Normalized Gradient Descent (SNGD)
Input: #Iterations $T$, $\mathbf{x}_1 \in \mathbb{R}^d$, learning rate $\eta$, minibatch size $b$
for $t = 1 \ldots T$ do
  Sample: $\{\psi_i\}_{i=1}^{b} \sim \mathcal{D}^b$, and define $f_t(\mathbf{x}) = \frac{1}{b} \sum_{i=1}^{b} \psi_i(\mathbf{x})$
  Update: $\mathbf{x}_{t+1} = \mathbf{x}_t - \eta\, \hat{\mathbf{g}}_t$, where $\mathbf{g}_t = \nabla f_t(\mathbf{x}_t)$, $\hat{\mathbf{g}}_t = \mathbf{g}_t / \|\mathbf{g}_t\|$
end for
Return: $\bar{\mathbf{x}}_T = \arg\min_{\{\mathbf{x}_1, \ldots, \mathbf{x}_T\}} f_t(\mathbf{x}_t)$
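Below is a Python sketch of Algorithm 2. The sampler interface (`sample_psi`, returning a value/gradient pair for one random $\psi \sim \mathcal{D}$) and the noisy-GLM-style usage at the end are our own illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sngd(sample_psi, x1, eta, T, b):
    """Stochastic Normalized Gradient Descent (a sketch of Algorithm 2)."""
    x = np.asarray(x1, dtype=float)
    best_x, best_val = x.copy(), np.inf
    for _ in range(T):
        batch = [sample_psi() for _ in range(b)]             # minibatch of psi_i ~ D
        ft_at_x = np.mean([val(x) for val, _ in batch])      # f_t evaluated at x_t
        if ft_at_x < best_val:                               # track argmin_t f_t(x_t)
            best_x, best_val = x.copy(), ft_at_x
        g = np.mean([grad(x) for _, grad in batch], axis=0)  # gradient of f_t at x_t
        norm = np.linalg.norm(g)
        if norm > 0.0:
            x = x - eta * g / norm                           # normalized update
    return best_x

# usage: squared-error losses of a sigmoid GLM with a planted w*, drawn on the fly
rng = np.random.default_rng(1)
d = 5
w_star = rng.normal(size=d); w_star /= np.linalg.norm(w_star)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sample_psi():
    xi = rng.normal(size=d); xi /= np.linalg.norm(xi)
    yi = sigmoid(xi @ w_star)
    val = lambda w: (yi - sigmoid(xi @ w)) ** 2
    grad = lambda w: 2.0 * (sigmoid(xi @ w) - yi) * sigmoid(xi @ w) * (1.0 - sigmoid(xi @ w)) * xi
    return val, grad

w_hat = sngd(sample_psi, x1=np.zeros(d), eta=0.05, T=300, b=100)
print(np.linalg.norm(w_hat - w_star))   # well below the initial distance ||0 - w*|| = 1
```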
The stochastic SLQC optimization setup: Consider the problem of minimizing a function $f : \mathbb{R}^d \mapsto \mathbb{R}$, and assume there exists a distribution over functions $\mathcal{D}$, such that
$$f(\mathbf{x}) := \mathbb{E}_{\psi \sim \mathcal{D}}[\psi(\mathbf{x})] .$$
We assume that we may access $f$ by randomly sampling minibatches of size $b$ and querying the gradients of these minibatches. Thus, upon querying a point $\mathbf{x}_t \in \mathbb{R}^d$, a random minibatch $\{\psi_i\}_{i=1}^b \sim \mathcal{D}^b$ is sampled, and we receive $\nabla f_t(\mathbf{x}_t)$, where $f_t(\mathbf{x}) = \frac{1}{b}\sum_{i=1}^b \psi_i(\mathbf{x})$. We make the following assumption regarding the minibatch averages:
Assumption 5.1. Let $T, \epsilon, \delta > 0$, $\mathbf{x}^* \in \arg\min_{\mathbf{x} \in \mathbb{R}^d} f(\mathbf{x})$. There exist $\kappa > 0$ and a function $b_0 : \mathbb{R}^3 \mapsto \mathbb{R}$ such that for $b \ge b_0(\epsilon, \delta, T)$, w.p. $\ge 1 - \delta$ and $\forall t \in [T]$, the minibatch average $f_t(\mathbf{x}) = \frac{1}{b}\sum_{i=1}^b \psi_i(\mathbf{x})$ is $(\epsilon, \kappa, \mathbf{x}^*)$-SLQC in $\mathbf{x}_t$. Moreover, we assume $|f_t(\mathbf{x})| \le M$, $\forall t \in [T]$, $\mathbf{x} \in \mathbb{R}^d$.
Note that we assume that $b_0 = \mathrm{poly}(1/\epsilon, \log(T/\delta))$.
Justification of Assumption 5.1: Noisy GLM regression (see Section 3.1.2) is an interesting instance of a stochastic optimization problem where Assumption 5.1 holds. Indeed, according to Lemma 3.2, given $\epsilon, \delta, T > 0$, then for $b \ge O(\log(T/\delta)/\epsilon^2)$ samples², the average minibatch function is $(\epsilon, \kappa, \mathbf{x}^*)$-SLQC in $\mathbf{x}_t$, $\forall t \in [T]$, w.p. $\ge 1 - \delta$.
²In fact, Lemma 3.2 states that for $b = \Omega(\log(1/\delta)/\epsilon^2)$, the error function is SLQC in a single decision point. Using the union bound, we can show that for $b = \Omega(\log(T/\delta)/\epsilon^2)$ it holds for $T$ decision points.
Local-quasi-convexity of minibatch averages is a plausible assumption when we optimize an expected sum of quasi-convex functions that share common global minima (or when the different global minima are close by). As seen from the examples presented in Equation (2),
and in Sections 3.1.1 and 3.1.2, this sum is generally not quasi-convex, but is more often locally-quasi-convex.
Note that in the general case, when the objective is a sum of quasi-convex functions, the number of local minima of such an objective may grow exponentially with the dimension $d$; see Auer et al. (1996). This might imply that a general setup in which each $\psi$ is quasi-convex may be hard.

# 5.1 Main Results

SNGD is presented in Algorithm 2. SNGD is similar to SGD, except that we normalize the gradients. The normalization is crucial in order to take advantage of the SLQC assumption, and in order to overcome the hurdles of plateaus and cliffs. Following is our main theorem:
Theorem 5.1. Fix $\delta, \epsilon, G, M, \kappa > 0$. Suppose we run SNGD with $T \ge \kappa^2 \|\mathbf{x}_1 - \mathbf{x}^*\|^2 / \epsilon^2$ iterations, $\eta = \epsilon/\kappa$, and $b \ge \max\left\{ \frac{M^2 \log(4T/\delta)}{2\epsilon^2},\, b_0(\epsilon, \delta, T) \right\}$. Assume that for $b \ge b_0(\epsilon, \delta, T)$, w.p. $\ge 1 - \delta$ and $\forall t \in [T]$, the function $f_t$ defined in the algorithm is $M$-bounded, and is also $(\epsilon, \kappa, \mathbf{x}^*)$-SLQC in $\mathbf{x}_t$. Then, with probability of at least $1 - 2\delta$, we have that $f(\bar{\mathbf{x}}_T) - f(\mathbf{x}^*) \le 3\epsilon$.
We prove Theorem 5.1 at the end of this section.
Remark 2. Since strict-quasi-convexity and $(G, \epsilon/G, \mathbf{x}^*)$-local-Lipschitzness are equivalent to SLQC (App. E), the theorem implies that $f$ could have arbitrarily large gradients outside $\mathbb{B}(\mathbf{x}^*, \epsilon/G)$, yet SNGD is still ensured to output an $\epsilon$-optimal point within $G^2\|\mathbf{x}_1 - \mathbf{x}^*\|^2/\epsilon^2$ iterations. We are not familiar with a similar guarantee for SGD even in the convex case.
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the con- cept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
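As a quick illustration of the quantities appearing in Theorem 5.1, the following sketch computes the prescribed iteration count, step size, and the Hoeffding part of the minibatch size; the problem constants passed in are made-up placeholders, and the $b_0(\epsilon,\delta,T)$ term is assumed negligible here.

```python
import math

def sngd_parameters(eps, delta, kappa, M, dist0):
    """Settings suggested by Theorem 5.1 (b0 term omitted for brevity)."""
    T = math.ceil(kappa**2 * dist0**2 / eps**2)        # T >= kappa^2 ||x1 - x*||^2 / eps^2
    eta = eps / kappa                                  # eta = eps / kappa
    b = math.ceil(M**2 * math.log(4 * T / delta) / (2 * eps**2))  # Hoeffding term of b
    return T, eta, b

print(sngd_parameters(eps=0.05, delta=0.01, kappa=1.0, M=1.0, dist0=3.0))
```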
Remark 3. Theorem 5.1 requires the minibatch size to be $\Omega(1/\epsilon^2)$. In the context of learning, the number of functions, $n$, corresponds to the number of training examples. By standard sample complexity bounds, $n$ should also be of order $1/\epsilon^2$. Therefore, one may wonder whether the size of the minibatch should be of order $n$. This is not true, since the required training set size is $1/\epsilon^2$ times the VC dimension of the hypothesis class. In many practical cases, the VC dimension is more significant than $1/\epsilon^2$, and therefore $n$ will be much larger than the required minibatch size. The reason our analysis requires a minibatch of size $1/\epsilon^2$, without the VC dimension factor, is that we are only "validating" and not "learning".
In SGD, for the case of convex functions, even a minibatch of size 1 suffices for guaranteed convergence. In contrast, for SNGD we require a minibatch of size $1/\epsilon^2$. The theorem below shows that the requirement for a large minibatch is not an artifact of our analysis but is truly necessary.
Theorem 5.2. Let $\epsilon \in (0, 0.1]$. There exists a distribution over convex loss functions such that running SNGD with a minibatch size of $b = 0.2/\epsilon$, with high probability, never reaches an $\epsilon$-optimal solution.
We prove Theorem 5.2 in Section 5.2.3. The gap between the upper bound of $1/\epsilon^2$ and the lower bound of $1/\epsilon$ remains an open question.
We now provide a sketch for the proof of Theorem 5.1:
Proof of Theorem 5.1. Theorem 5.1 is a consequence of the following two lemmas. In the first we show that whenever all $f_t$'s are SLQC, there exists some $t$ such that $f_t(x_t) - f_t(x^*) \leq \epsilon$. In the second lemma, we show that for a large enough minibatch size $b$, for any $t \in [T]$ we have $f(x_t) \leq f_t(x_t) + \epsilon$ and $f(x^*) \geq f_t(x^*) - \epsilon$. Combining these two lemmas we conclude that $f(\bar{x}_T) - f(x^*) \leq 3\epsilon$.
Lemma 5.1. Let $\epsilon, \delta > 0$. Suppose we run SNGD for $T \geq \kappa^2\|x_1 - x^*\|^2/\epsilon^2$ iterations, $b \geq b_0(\epsilon,\delta,T)$, and $\eta = \epsilon/\kappa$. Assume that w.p. $\geq 1-\delta$ all $f_t$'s are $(\epsilon,\kappa,x^*)$-SLQC in $x_t$, whenever $b \geq b_0(\epsilon,\delta,T)$. Then w.p. $\geq 1-\delta$ we must have some $t \in [T]$ for which $f_t(x_t) - f_t(x^*) \leq \epsilon$.
Lemma 5.1 is proved similarly to Theorem 4.1; we defer the proof to Section 5.2.1. The second lemma relates $f_t(x_t), f_t(x^*)$ to $f(x_t), f(x^*)$:

Lemma 5.2. Fix $\delta, \epsilon, M > 0$. If $b \geq \frac{M^2\log(4T/\delta)}{2\epsilon^2}$, then w.p. $\geq 1-\delta$, for every $t \in [T]$ we have $f(x_t) \leq f_t(x_t) + \epsilon$ and $f(x^*) \geq f_t(x^*) - \epsilon$.
Lemma 5.2 is a direct consequence of Hoeffding's bound (see Section 5.2.2). Using the definition of $\bar{x}_T$ (Alg. 2), together with Lemma 5.2, gives:
$$f(\bar{x}_T) - f(x^*) \leq f_t(x_t) - f_t(x^*) + 2\epsilon, \quad \forall t \in [T]$$
Combining the latter with Lemma 5.1 establishes Theorem 5.1.
# 5.2 Remaining Proofs
# 5.2.1 Proof of Lemma 5.1
Proof. First note that if the gradient of $f_t$ vanishes at $x_t$, then by the SLQC assumption we must have $f_t(x_t) - f_t(x^*) \leq \epsilon$. Assume next that we perform $T$ iterations and the gradient of $f_t$ at $x_t$ never vanishes in these iterations. Consider the update rule of SNGD (Algorithm 2); then by standard algebra we get:
$$\|x_{t+1} - x^*\|^2 = \|x_t - x^*\|^2 - 2\eta\langle\hat{g}_t, x_t - x^*\rangle + \eta^2$$
Assume that $\forall t \in [T]$ we have $f_t(x_t) - f_t(x^*) > \epsilon$. Take $y = x^* + (\epsilon/\kappa)\,\hat{g}_t$, and observe that $\|y - x^*\| \leq \epsilon/\kappa$. Hence the $(\epsilon,\kappa,x^*)$-SLQC assumption implies that $\langle\hat{g}_t, y - x_t\rangle \leq 0$, thus
$$\langle\hat{g}_t,\, x^* + (\epsilon/\kappa)\,\hat{g}_t - x_t\rangle \leq 0 \;\;\Rightarrow\;\; \langle\hat{g}_t, x_t - x^*\rangle \geq \epsilon/\kappa.$$
This implies that, if we set $\eta = \epsilon/\kappa$, then
$$\|x_{t+1} - x^*\|^2 \leq \|x_t - x^*\|^2 - 2\eta\epsilon/\kappa + \eta^2 = \|x_t - x^*\|^2 - \epsilon^2/\kappa^2.$$
So, after $T$ iterations for which $f_t(x_t) - f_t(x^*) > \epsilon$, we get
$$0 \leq \|x_{T+1} - x^*\|^2 \leq \|x_1 - x^*\|^2 - T\epsilon^2/\kappa^2,$$
Therefore, we must have
$$T \leq \frac{\kappa^2\|x_1 - x^*\|^2}{\epsilon^2}.$$
# 5.2.2 Proof of Lemma 5.2
Proof. At each step $t$, the minibatch is sampled after $x_t$ and $x^*$ are fixed. The random variables $f_t(x_t)$ (resp. $f_t(x^*)$) are an average of $b$ i.i.d. random variables whose expectation is $f(x_t)$ (resp. $f(x^*)$). These random variables are bounded, since we assume $M$-boundedness (see Thm. 5.1). Applying Hoeffding's bound to the $b$ random samples mentioned above, together with the union bound over $t \in [T]$ and over both sequences of random variables, the lemma follows.
# 5.2.3 Proof of Theorem 5.2
We will require the following lemma, whose proof is given in App. G.
Lemma 5.3 (Absorb probabilities). Let $\{X_t\}_{t\geq 0}$ be a Markov chain over the states $\{0,1,2,\ldots\}$ such that $0$ is an absorbing state, and the transition distribution elsewhere is as follows:
$$X_{t+1} \mid \{X_t = i\} = \begin{cases} i-1 & \text{w.p. } p \\ i+1 & \text{w.p. } 1-p \end{cases}$$
Define the absorb probabilities $\alpha_i := P(\exists t > 0 : X_t = 0 \mid X_0 = i)$; then
$$\alpha_i = \left(\frac{p}{1-p}\right)^i, \quad \forall i \geq 1.$$
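As a quick sanity check of Lemma 5.3 (not part of the original text), one can simulate the biased walk and compare the empirical absorb frequency with $(p/(1-p))^i$; the truncation at a fixed horizon is an assumption that barely matters, since for $p < 1/2$ the walk either absorbs early or drifts away.

```python
import numpy as np

def empirical_absorb(i, p, trials=20000, max_steps=200, seed=0):
    """Estimate P(the chain of Lemma 5.3 started at state i ever hits 0)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        s = i
        for _ in range(max_steps):
            s += -1 if rng.random() < p else 1     # down w.p. p, up w.p. 1-p
            if s == 0:
                hits += 1
                break
    return hits / trials

p, i = 0.2, 3
print(empirical_absorb(i, p), (p / (1 - p)) ** i)  # both close to 0.0156
```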
Proof. To prove Theorem 5.2, we construct a counterexample in one dimension. Consider the following distribution $\mathcal{D}$ over loss functions:
$$f(x) = \begin{cases} -0.5\epsilon x & \text{w.p. } 1-\epsilon \\ (1-0.5\epsilon)\max\{x+3,\,0\} & \text{w.p. } \epsilon \end{cases} \tag{4}$$
It can be verified that the optimum of $\mathbb{E}_{\mathcal{D}}[f(x)]$ is at $x^* = -3$, and that the slope of the expected loss in $(-3,\infty)$ is $0.5\epsilon$. Also notice that all points in the segment $[-5,-1]$ are $\epsilon$-optimal.
Suppose we use SNGD with a batch size of $b = 0.2/\epsilon$, i.e., we sample the gradient $b$ times at any query point, and then average the samples and take the sign. Assume that at time $t$ the queried point is greater than $x^* = -3$. Let $Y_t$ be the averaged gradient over the
batch received at time $t$, and define $p = P(Y_t \geq 0)$, i.e., the probability that this sign is non-negative. Then the following is a lower bound on $1-p$:
$$1-p := P(Y_t < 0) \geq (1-\epsilon)^b = (1-\epsilon)^{0.2/\epsilon},$$
where $(1-\epsilon)^b$ is the probability that all $b$ samples are negative. Now, consider the function $G(\epsilon) = (1-\epsilon)^{0.2/\epsilon}$. It can be shown that $G$ is monotonically decreasing in $[0,1]$, and that $G(0.1) > 0.8$. Therefore, for any $\epsilon \in (0,0.1]$ we have $p < 0.2$.
⤠tâ[T ], be the random variables describing the queries of SNGD under the } distribution over loss functions given in Equation (4). Also assume that we start SNGD with X1 = 0, i.e., at a distance of D = 3 from the optimum. Then the points that SNGD queries iâZ, and the following holds: are on the discrete lattice
{
}
p 1 w.p. w.p. (i 1)η â (i + 1)η Xt = iη Xt+1 = } |{ p
â
# p | 1507.02030#36 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
Let $t_0 = \lceil 1/\eta \rceil$; note that $t_0$ is the minimal number of steps required by SNGD to arrive from $X_1 = 0$ at an $\epsilon$-optimal solution. Now, in order to analyze the probability that SNGD ever arrives at an $\epsilon$-optimal point, it is sufficient to consider the Markov chain over the lattice $\{i\eta\}_{i\in\mathbb{Z}}$ with the boundary of the $\epsilon$-optimal segment as an absorbing state. Using Lemma 5.3 we conclude that if we start at $X_1 = 0$ then the probability that we ever absorb is:
$$P\left(\exists t > 0 : X_t \text{ is } \epsilon\text{-optimal} \mid X_0 = 0\right) \leq \left(\frac{p}{1-p}\right)^{t_0} \leq \left(\frac{1}{4}\right)^{10},$$
where we used $p < 0.2$, a bound of $G = 1$ on the gradients of the losses, NGD's learning rate $\eta = \epsilon/G$, and $\epsilon \leq 0.1$.
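The construction above can also be simulated directly. The sketch below is an illustration (not the paper's code): it draws per-sample gradients from the distribution in Equation (4) at points to the right of $x^* = -3$ and applies 1-D sign updates, and typically the iterate drifts away from the $\epsilon$-optimal segment when the minibatch is this small.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
b = int(np.ceil(0.2 / eps))       # the small minibatch size of the construction
eta = eps                         # step size eta = eps / G with G = 1
x, T = 0.0, 2000                  # start at X_1 = 0; eps-optimal set is roughly [-5, -1]

for _ in range(T):
    # Per-sample gradients at x > -3: -0.5*eps w.p. 1-eps, (1-0.5*eps) w.p. eps (Eq. 4)
    g = np.where(rng.random(b) < eps, 1.0 - 0.5 * eps, -0.5 * eps)
    x -= eta * np.sign(g.mean())  # in 1-D the normalized step is just a sign step
print(x)                          # typically far to the right of [-5, -1]
```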
# 6 Experiments
A better understanding of how to train deep neural networks is one of the greatest challenges in current machine learning and optimization. Since learning NN (Neural Network) architectures essentially requires solving a hard non-convex program, we have decided to focus our empirical study on this type of task. As a test case, we train a Neural Network with a single hidden layer of 100 units over the MNIST data set. We use a ReLU activation function, and minimize the square loss. We employ a regularization over weights with a parameter of λ = 5
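For concreteness, the following numpy sketch sets up a single-hidden-layer ReLU network with squared loss and L2 weight regularization and applies one SNGD step; the regularization constant, initialization scale, one-hot targets, and the 0.5 loss scaling are assumptions for illustration only (the exact λ is truncated in this copy of the text).

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 784, 100, 10                 # MNIST-sized, one hidden layer of 100 units
lam = 5e-4                                        # hypothetical regularization constant

W1, b1 = 0.01 * rng.normal(size=(d_hid, d_in)), np.zeros(d_hid)
W2, b2 = 0.01 * rng.normal(size=(d_out, d_hid)), np.zeros(d_out)

def loss_and_grads(X, Y):
    H = np.maximum(X @ W1.T + b1, 0.0)            # ReLU hidden layer
    P = H @ W2.T + b2                             # linear output
    diff = P - Y
    loss = 0.5 * np.mean(np.sum(diff**2, axis=1)) + 0.5 * lam * (np.sum(W1**2) + np.sum(W2**2))
    gP = diff / X.shape[0]                        # gradient of the squared loss term
    gW2, gb2 = gP.T @ H + lam * W2, gP.sum(axis=0)
    gH = (gP @ W2) * (H > 0)                      # backprop through ReLU
    gW1, gb1 = gH.T @ X + lam * W1, gH.sum(axis=0)
    return loss, (gW1, gb1, gW2, gb2)

def sngd_step(params, grads, eta=0.1):
    flat = np.concatenate([g.ravel() for g in grads])
    scale = eta / max(np.linalg.norm(flat), 1e-12)   # normalize the full gradient
    return [p - scale * g for p, g in zip(params, grads)]

X = rng.normal(size=(100, d_in))                  # placeholder minibatch instead of MNIST
Y = np.eye(d_out)[rng.integers(0, d_out, 100)]
loss, grads = loss_and_grads(X, Y)
W1, b1, W2, b2 = sngd_step([W1, b1, W2, b2], grads)
print(loss)
```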
At first we were interested in comparing the performance of SNGD to MSGD (Minibatch Stochastic Gradient Descent), and to a stochastic variant of Nesterov's accelerated gradient
Figure 2: Comparison between optimization schemes. Left: test error. Middle: objective value (on the training set). Right: the objective of SNGD for different minibatch sizes.
method (Sutskever et al., 2013), which is considered to be state-of-the-art. For MSGD and Nesterov's method we used a step size rule of the form $\eta_t = \eta_0(1 + \gamma t)^{-3/4}$, with $\eta_0 = 0.01$ and $\gamma = 10^{-4}$. For SNGD we used a constant step size of 0.1. In Nesterov's method we used a momentum of 0.95. The comparison appears in Figures 2(a), 2(b). As expected, MSGD converges relatively slowly. Conversely, the performance of SNGD is comparable with Nesterov's method. All methods employed a minibatch size of 100.
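One way to write the three update rules used in this comparison is sketched below; the Nesterov variant follows the momentum formulation of Sutskever et al. (2013), and the decaying schedule and constants mirror the ones stated above.

```python
import numpy as np

def msgd_step(w, g, t, eta0=0.01, gamma=1e-4):
    eta_t = eta0 * (1.0 + gamma * t) ** (-0.75)     # eta_t = eta0 (1 + gamma t)^{-3/4}
    return w - eta_t * g

def nesterov_step(w, v, grad_fn, t, mu=0.95, eta0=0.01, gamma=1e-4):
    eta_t = eta0 * (1.0 + gamma * t) ** (-0.75)
    v_new = mu * v - eta_t * grad_fn(w + mu * v)    # gradient at the look-ahead point
    return w + v_new, v_new

def sngd_step(w, g, eta=0.1):
    return w - eta * g / max(np.linalg.norm(g), 1e-12)  # constant step on the direction
```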
Later, we were interested in examining the effect of the minibatch size on the performance of SNGD. We employed SNGD with different minibatch sizes. As seen in Figure 2(c), the performance improves significantly as the minibatch size increases.
# 7 Discussion
We have presented the first provable gradient-based algorithm for stochastic quasi-convex optimization. This is a first attempt at generalizing the well-developed machinery of stochastic convex optimization to the challenging non-convex problems facing machine learning, and at better characterizing the border between NP-hard non-convex optimization and tractable cases such as the ones studied herein.
Amongst the numerous challenging questions that remain, we note that there is a gap between the upper and lower bounds on the minibatch size sufficient for SNGD to provably converge.
# References
P. Auer, M. Herbster, and M. K. Warmuth. Exponentially many local minima for single neurons. Advances in Neural Information Processing Systems, pages 316–322, 1996.
Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
K. Doya. Bifurcations of recurrent neural networks in gradient descent learning. IEEE Transactions on Neural Networks, 1:75–80, 1993.
J.-L. Goffin, Z.-Q. Luo, and Y. Ye. Complexity analysis of an interior cutting plane method for convex feasibility problems. SIAM Journal on Optimization, 6(3):638–652, 1996.
A. T. Kalai and R. Sastry. The Isotron algorithm: High-dimensional isotonic regression. In COLT, 2009.
Q. Ke and T. Kanade. Quasiconvex optimization for robust geometric reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10):1834–1847, 2007.
R. F. Khabibullin. A method to find a point of a convex set. Issled. Prik. Mat., 4:15–22, 1977.
K. C. Kiwiel. Convergence and efficiency of subgradient methods for quasiconvex minimization. Mathematical Programming, 90(1):1–25, 2001.
I. V. Konnov. On convergence properties of a subgradient method. Optimization Methods and Software, 18(1):53–62, 2003.
J.-J. Laffont and D. Martimort. The Theory of Incentives: The Principal-Agent Model. Princeton University Press, 2009.
J. Martens and I. Sutskever. Learning recurrent neural networks with Hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1033–1040, 2011.
P. McCullagh and J. Nelder. Generalised Linear Models. London: Chapman and Hall/CRC, 1989.
Y. E. Nesterov. Minimization methods for nonsmooth convex and quasiconvex functions. Matekon, 29:519–531, 1984.
R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, pages 1310–1318, 2013.
B. T. Polyak. A general method of solving extremum problems. Dokl. Akademii Nauk SSSR, 174(1):33, 1967.
J. Sikorski. Quasi subgradient algorithms for calculating surrogate constraints. In Analysis and Algorithms of Optimization Problems, pages 203–236. Springer, 1986.
I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1139–1147, 2013.
H. R. Varian. Price discrimination and social welfare. The American Economic Review, pages 870–875, 1985.
E. Wolfstetter. Topics in Microeconomics: Industrial Organization, Auctions, and Incentives. Cambridge University Press, 1999.
Y. I. Zabotin, A. Korablev, and R. F. Khabibullin. The minimization of quasicomplex functionals. Izv. Vyssh. Uch. Zaved. Mat., (10):27–33, 1972.
# A Equivalence Between Definitions 2.2 and 2.3
First let us show that 2.2 $\Rightarrow$ 2.3.
Proof of 2.2 $\Rightarrow$ 2.3. Let $x, y \in \mathbb{R}^d$ be such that $f(x), f(y) \leq \lambda$. Let $[x,y]$ be the line segment connecting these points; we need to show that $\forall z \in [x,y]:\ f(z) \leq \lambda$. Assume by contradiction that there exists $z \in [x,y]$ such that $f(z) > \lambda$. Assume w.l.o.g. that $\langle\nabla f(z), x - y\rangle \neq 0$ (otherwise we could always find $z' \in [x,y]$ such that $f(z') = f(z)$ and $\langle\nabla f(z'), x - y\rangle \neq 0$), and let $\alpha \in (0,1)$ be such that $z = \alpha x + (1-\alpha)y$. By Definition 2.2 the following applies:
$$0 \leq \langle\nabla f(z), z - x\rangle = \langle\nabla f(z), \alpha x + (1-\alpha)y - x\rangle = (1-\alpha)\langle\nabla f(z), y - x\rangle$$
$$0 \leq \langle\nabla f(z), z - y\rangle = \langle\nabla f(z), \alpha x + (1-\alpha)y - y\rangle = -\alpha\langle\nabla f(z), y - x\rangle.$$
Since $\alpha \in (0,1)$, we conclude that $\langle\nabla f(z), y - x\rangle \geq 0$ and also $\langle\nabla f(z), y - x\rangle \leq 0$. This is a contradiction, since we assumed $\langle\nabla f(z), y - x\rangle \neq 0$.
Let us now show that 2.3 $\Rightarrow$ 2.2.
Proof of 2.3 $\Rightarrow$ 2.2. Consider the one-dimensional function $h(\alpha) = f(x + \alpha(y - x))$. The derivative of $h$ at $0$ is $h'(0) = \langle\nabla f(x), y - x\rangle$. Therefore, we need to show that if $y \in S_f(x)$ then $h'(0) \leq 0$. By the quasi-convexity assumption, the entire line segment connecting $x$ to $y$ is in $S_f(x)$. Therefore, for every $\alpha \in [0,1]$ we have $h(\alpha) \leq h(0)$. This means that
$$h'(0) = \lim_{\alpha \to +0} \frac{h(\alpha) - h(0)}{\alpha} \leq 0.$$
# B Local Quasi-convexity of g
Here we show that the function $g$ that appears in Equation (2) is SLQC. Denote $g_x := \nabla g(x)$; in order to prove SLQC it is sufficient to show that $\|g_x\| > 0$ and that $\langle g_x, x - y\rangle \geq 0$ for the relevant points $y$ near $x^*$. Deriving $g$ at $x$ we have:
$$g_x = \left(\frac{e^{-x_1}}{(1+e^{-x_1})^2},\ \frac{e^{-x_2}}{(1+e^{-x_2})^2}\right) > 0,$$
and it is clear that $\|g_x\| > 0$, $\forall x \in [-10,10]^2$; thus strictness always holds. We divide the proof of $\langle g_x, x - y\rangle \geq 0$ into cases:
Case 1: Suppose that $x_1 \leq 0, x_2 \leq 0$. In this case it is possible to show that the Hessian of $g$ is positive semi-definite, thus $g$ is convex in $[-10,0]^2$. Since $g$ is also 1-Lipschitz, it follows that it is $(\epsilon,1,x^*)$-SLQC at every $x \in [-10,0]^2$.
Case 2: Suppose that at least one of $x_1, x_2$ is positive; w.l.o.g. assume that $x_1 > 0$. In this case:
$$\langle g_x, x - y\rangle = \frac{e^{-x_1}}{(1+e^{-x_1})^2}(x_1 - y_1) + \frac{e^{-x_2}}{(1+e^{-x_2})^2}(x_2 - y_2) \geq \cdots > 0,$$
1507.02030 | 48 | where in the second line we used ||y â x*|| < ¢. In the third line we used ⬠⬠[0,1], also e~*(z+10âe) > â10+e Grey entree . The fourth line uses 10 = arg maxz¢(o,10] woe and minze{â 10,0) e+e > e- 10 and the last line uses ⬠< 1.
The above two cases establish that $g$ is $(\epsilon,1,x^*)$-SLQC at every $x \in [-10,10]^2$, for every $\epsilon \in (0,1]$.
# C Proof of Lemma 3.1
Proof. Given $\epsilon > 0$, we will show that $\widehat{\mathrm{err}}_m$ is $(\epsilon, e^W, w^*)$-SLQC at every $w \in \mathbb{B}(0,W)$. Recall $\sigma(z) = (1+e^{-z})^{-1}$, and consider $\|w\| \leq W$ such that $\widehat{\mathrm{err}}_m(w) = \frac{1}{m}\sum_{i=1}^m\left(y_i - \sigma(\langle w, x_i\rangle)\right)^2 \geq \epsilon$. Also let $v$ be a point $\epsilon/e^W$-close to the minimum $w^*$; we therefore have:
$$\langle\nabla\widehat{\mathrm{err}}_m(w), w - v\rangle = \frac{2}{m}\sum_{i=1}^m \frac{e^{\langle w,x_i\rangle}}{(1+e^{\langle w,x_i\rangle})^2}\left(\sigma(\langle w,x_i\rangle) - \sigma(\langle w^*,x_i\rangle)\right)\left(\langle w,x_i\rangle - \langle w^*,x_i\rangle + \langle w^* - v, x_i\rangle\right) \geq \cdots > 0. \tag{5}$$
In the second line we used y; = ¢(w*, x;), which holds for the idealized setup. In the third line we used the fact that ¢(z) is monotonically increasing and 1/4-Lipschitz, and therefore ((2) â 6(2")) (2 â 2) > 4 (62) â o(2"))?. We also used |(w" â v ox) < w= wll < 1/4. The fourth line -w zi Wi tee aay ee, and |o(w, x;) â ¢(w*,x;)| < 1; Finally we used max, mer
. 2 _ . . . uses minyz|<w rere >e-". The last line follows since we assume GT,,(w) > ¢â¬. The strictness is immediate since (V&t,,(w), wâv) > 0, therefore, the above establishes SLQC.
We will now show that $\widehat{\mathrm{err}}_m$ is generally not quasi-convex. Consider the idealized setup with two samples $(x_1, y_1), (x_2, y_2)$, where $x_1 = (0, -\log 4)$, $x_2 = (-\log 4, 0)$ and $y_1 = y_2 = 1/5$. The error function is therefore:
$$\widehat{\mathrm{err}}_m(w) = \frac{1}{2}\left(\frac{1}{5} - \frac{1}{1+4^{w_2}}\right)^2 + \frac{1}{2}\left(\frac{1}{5} - \frac{1}{1+4^{w_1}}\right)^2,$$
and it can be verified that the optimal predictor is $w^* = (1,1)$, yielding $\widehat{\mathrm{err}}_m(w^*) = 0$. Now let $w_1 = (3,1)$, $w_2 = (1,3)$; it can be shown that $\widehat{\mathrm{err}}_m(w_1) = \widehat{\mathrm{err}}_m(w_2) \leq 0.018$, yet $\widehat{\mathrm{err}}_m(w_1/2 + w_2/2) \geq 0.019$. Thus $\widehat{\mathrm{err}}_m$ is not quasi-convex.
# D Proof of Lemma 3.2
Proof. Since we are in the noisy idealized setup, and $y_i \in [0,1]$ for all $i$, the following holds:
$$y_i = \sigma(\langle w^*, x_i\rangle) + \xi_i,$$
where $\{\xi_i\}_{i=1}^m$ are zero-mean, independent and bounded random variables, $\forall i \in [m]:\ |\xi_i| \leq 1$. Therefore $\widehat{\mathrm{err}}_m$ can be written as follows:
$$\widehat{\mathrm{err}}_m(w) = \frac{1}{m}\sum_{i=1}^m\left(y_i - \sigma(\langle w, x_i\rangle)\right)^2 = \frac{1}{m}\sum_{i=1}^m\left(\sigma(\langle w^*, x_i\rangle) - \sigma(\langle w, x_i\rangle) + \xi_i\right)^2.$$
$$\langle\nabla\widehat{\mathrm{err}}_m(w), w - v\rangle = \frac{2}{m}\sum_{i=1}^m \frac{e^{\langle w,x_i\rangle}}{(1+e^{\langle w,x_i\rangle})^2}\left(\sigma(\langle w,x_i\rangle) - y_i\right)\left(\langle w,x_i\rangle - \langle v,x_i\rangle\right) \geq \cdots \geq \frac{\epsilon e^{-W}}{2} + \frac{1}{m}\sum_{i=1}^m \xi_i\lambda_i(w), \tag{6}$$
(wx;) < (wx) Grew ((W", Xi) â (Ww, x;)), and Ay(w) = Ai(w) = GE was ye i(W). The argumentation justifying the above inequalities is the same as is done for Equation (5) (see Appendix C). According to Equation (6), the lemma is established if we can show that where we denote A;(w) =
i So GAi(w) > âee WV m i=l
The {â¬,}â¢, are zero mean and independent, and |â¬;\;(w)| < 4(W +1), thus applying Heoffdingâs bound we get that the above does hold for m > se wyâ log(1/6). Note that in bounding |â¬;\;(w)|, we used |g;| < 1, also w, w* ⬠B(0, W), and max, â& AIS (1+e7)?_ =
Ëλi(w) , we used |
# ξi |
# ξi |
(1+ez)2 â¤
| â¤
â
# E Locally-Lipschitz and Strictly Quasi-Convex are SLQC
In order to show that a strictly quasi-convex function which is also $(G, \epsilon/G, x^*)$-locally-Lipschitz is SLQC, we require the following lemma:
Lemma E.1. Let $z \in \mathbb{R}^d$, and assume that $f$ is $(G,\epsilon/G,z)$-locally-Lipschitz. Then, for every $x$ with $f(x) - f(z) \geq \epsilon$ we have $\mathbb{B}(z,\epsilon/G) \subseteq S_f(x)$.
Proof. Recall the notation $S_f(x) = \{y : f(y) \leq f(x)\}$. By Lipschitzness, for every $y \in \mathbb{B}(z,\epsilon/G)$ we have $f(y) \leq f(z) + \epsilon$. Combining this with the assumption that $f(z) + \epsilon \leq f(x)$, we obtain that $y \in S_f(x)$.
Therefore, if $f(x) - f(x^*) > \epsilon$, then $\forall y \in \mathbb{B}(x^*, \epsilon/G)$ it holds that $f(x) - f(y) > 0$, and since $f$ is strictly quasi-convex, the latter means that $\langle\nabla f(x), y - x\rangle \leq 0$ and $\|\nabla f(x)\| > 0$. Thus $(\epsilon, G, x^*)$-SLQC is established.
# F Proof of Theorem 4.2
The key lemma that enables us to attain faster rates for smooth functions is the following:
Lemma F.1. Let $x^*$ be a global minimum of $f$. Also assume that $f$ is $(\beta, \sqrt{2\epsilon/\beta}, x^*)$-locally-smooth. Then, for every $x$ with $f(x) - f(x^*) \geq \epsilon$ we have $\mathbb{B}(x^*, \sqrt{2\epsilon/\beta}) \subseteq S_f(x)$.
â â Proof. Combining the deï¬nition of local-smoothness (Def. 2.1) together with get
V f(x*) =
f(y) â FO) < Sllyâ x", Vy ⬠Bx", V'2¢/8)
â FO) < Sllyâ x", Vy B(x*, \/2â¬/3) we have f(y) | 1507.02030#56 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 57 | f(y) − f(x*) ≤ (β/2)‖y − x*‖²,   ∀y ∈ B(x*, √(2ε/β))
Therefore, for every y ∈ B(x*, √(2ε/β)) we have f(y) ≤ f(x*) + ε. Combining with the assumption that f(x*) + ε ≤ f(x), we obtain that y ∈ S_f(x). ∎
The proof of Theorem 4.2 follows the same lines as the proof of Theorem 4.1. The main difference is that whenever f(x_t) − f(x*) > ε, we use Lemma F.1 and quasi-convexity to show that for y = x* + √(2ε/β)·ĝ_t it follows that
⟨∇f(x_t), y − x_t⟩ ≤ 0 .
We therefore omit the details of the proof.
# G Proof of Lemma 5.3
Proof. Using the stationarity and Markov property of the chain, we can write the following recursive equations for the absorption probabilities:
α_i = (1 − p)α_{i+1} + p·α_{i−1},   i > 1      (7)
α_1 = (1 − p)α_2 + p                           (8)
α_1 = (1 − p)α_2 + p      (8) | 1507.02030#57 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 58 | α_i = (1 − p)α_{i+1} + p·α_{i−1},   i > 1      (7)
α_1 = (1 − p)α_2 + p                           (8)
Let's guess a solution of the form α_i = c_0·ρ^i, where ρ is the decay parameter of the absorption probabilities. By inserting this solution into equation (7) we get an equation for ρ:
(1 − p)ρ² − ρ + p = 0 .
And it can be validated that the only nontrivial solution is ρ = p/(1 − p); using the latter ρ in equation (8) we get c_0 = 1, and therefore we conclude that:
α_i = (p/(1 − p))^i ,   ∀i ∈ [d]
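A minimal numerical check (not part of the paper) that this closed form indeed satisfies the recursions (7) and (8); the value p = 0.3 is an arbitrary illustrative choice.

```python
# Verify that alpha_i = (p/(1-p))**i solves recursions (7) and (8) above.
p = 0.3
rho = p / (1.0 - p)
alpha = lambda i: rho ** i

# (7): alpha_i = (1-p)*alpha_{i+1} + p*alpha_{i-1}, for i > 1
for i in range(2, 50):
    assert abs(alpha(i) - ((1 - p) * alpha(i + 1) + p * alpha(i - 1))) < 1e-12

# (8): alpha_1 = (1-p)*alpha_2 + p
assert abs(alpha(1) - ((1 - p) * alpha(2) + p)) < 1e-12
print("recursions (7) and (8) hold for alpha_i = (p/(1-p))**i")
```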
# H A Broader Notion of Local-Quasi-Convexity
Definition 3.1 describes a rich family of functions, as depicted in Sections 3.1.1 and 3.1.2. However, it is clear that it does not capture piecewise constant and quasi-convex functions, such as the zero-one loss, or the Perceptron problem. | 1507.02030#58 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 59 | In some cases, e.g. the Perceptron problem, we may have access to a direction oracle G : R^d → R^d. This oracle is a proxy for the gradient, pointing us in a global ascent (descent) direction. Following is a broader definition of locally-quasi-convex functions:
Definition H.1. (Local-Quasi-Convexity) Let x, z ∈ R^d, κ, ε > 0. Also let G : R^d → R^d. We say that f : R^d → R is (ε, κ, z)-Strictly-Locally-Quasi-Convex (SLQC) in x, with respect to the direction oracle G, if at least one of the following applies:
1. f(x) − f(z) ≤ ε
2. ‖G(x)‖ > 0, and for every y ∈ B(z, ε/κ) it holds that ⟨G(x), y − x⟩ ≤ 0.
â | 1507.02030#59 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 60 | Thus, Definition 3.1 is a special case of the above, which takes the gradient of f to be the direction oracle. Note that we can show that NGD/SNGD and their guarantees still hold for SLQC functions with a direction oracle. The algorithms and proofs are very similar to the ones that appear in the paper, and we therefore omit the details.
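The following is a minimal sketch, under assumptions, of what a normalized update driven by a direction oracle could look like; it is not the paper's Algorithm 1, and the step size, horizon, and best-iterate selection are illustrative choices.

```python
import numpy as np

def ngd_with_oracle(oracle, f, x0, eta, T):
    """Minimal sketch (not the paper's pseudocode) of normalized descent
    driven by a direction oracle G, as in Definition H.1. When `oracle` is
    the true gradient this reduces to plain NGD; `f` is only used here to
    keep the best iterate. `eta` and `T` are assumed, untuned parameters."""
    x = np.asarray(x0, dtype=float)
    best = x.copy()
    for _ in range(T):
        g = np.asarray(oracle(x), dtype=float)
        norm = np.linalg.norm(g)
        if norm == 0:                       # oracle vanishes: stop
            break
        x = x - eta * g / norm              # only the *direction* of G is used
        if f(x) < f(best):
            best = x.copy()
    return best

# Example: with the gradient itself as the oracle, minimize a quasi-convex function.
f = lambda x: np.log1p(np.dot(x, x))
grad = lambda x: 2.0 * x / (1.0 + np.dot(x, x))
x_hat = ngd_with_oracle(grad, f, x0=np.ones(5), eta=0.05, T=500)
print(f(x_hat))   # close to the optimal value f(0) = 0
```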
In the following section we illustrate a scenario that fits the above definition.
# H.1 The γ-margin Perceptron
In this setup we have a collection of m samples {(x_i, y_i)}_{i=1}^m ∈ B_d × {0,1}, guaranteed to have w* ∈ R^d such that y_i⟨w*, x_i⟩ ≥ γ, ∀i ∈ [m]. Thus, using the sign of ⟨w*, x_i⟩ as a predictor, it classifies all the points correctly (with a margin of γ).
Letting φ be the zero-one loss, φ(z) = 1_{z≥0}, we measure the performance of a predictor w ∈ R^d by the average (square) error over all samples,
err_m(w) = (1/m) Σ_{i=1}^m (y_i − φ(⟨w, x_i⟩))²      (9) | 1507.02030#60 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 61 | err_m(w) = (1/m) Σ_{i=1}^m (y_i − φ(⟨w, x_i⟩))²      (9)
Clearly, the gradients of err_m(w) vanish almost everywhere. Luckily, from the convergence analysis of the Perceptron (see e.g. Kalai and Sastry (2009)), we know the following to be a direction oracle for err_m(w):
G(w) = (1/m) Σ_{i=1}^m (φ(⟨w, x_i⟩) − y_i) x_i      (10)
The next lemma states that in the above setup, the error function is SLQC with respect to G. This implies that Algorithm 1 finds an ε-optimal minimum of err_m(w) within poly(1/ε) iterations.
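Below is a small illustrative sketch (not the paper's implementation) of the error in Equation (9) and the direction oracle in Equation (10), applied with normalized updates on a synthetic margin instance; the data generation, the margin filter |⟨w*, x⟩| ≥ γ, the step size, and the iteration count are all assumptions made for the example.

```python
import numpy as np

phi = lambda z: (z >= 0).astype(float)        # zero-one transfer phi(z) = 1_{z >= 0}

def err_m(w, X, y):
    """Average square error of Equation (9)."""
    return np.mean((y - phi(X @ w)) ** 2)

def direction_oracle(w, X, y):
    """Perceptron-style direction oracle of Equation (10)."""
    return (phi(X @ w) - y) @ X / len(y)

# Tiny synthetic gamma-margin instance (illustrative only):
rng = np.random.default_rng(1)
w_star = np.array([1.0, -1.0]) / np.sqrt(2)
X = rng.normal(size=(200, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # samples on the unit sphere
X = X[np.abs(X @ w_star) >= 0.2]                   # keep points with margin gamma = 0.2
y = phi(X @ w_star)

w = np.zeros(2)
for _ in range(200):                               # normalized updates along -G(w)
    g = direction_oracle(w, X, y)
    if np.linalg.norm(g) == 0:
        break
    w -= 0.05 * g / np.linalg.norm(g)
print(err_m(w, X, y))   # typically drops far below the ~0.5 error of w = 0
```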
Lemma H.1. Consider the γ-margin Perceptron problem. Then the error function appearing in Equation (9) is (ε, 2/γ, w*)-SLQC in w, ∀ε ∈ (0,1), ∀w ∈ R^d, with respect to the direction oracle appearing in Equation (10). | 1507.02030#61 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 62 | Proof. Given ε ∈ (0,1), we will show that err_m is (ε, 2/γ, w*)-SLQC at every w ∈ R^d. Consider w ∈ R^d such that err_m(w) = (1/m) Σ_{i=1}^m (y_i − φ(⟨w, x_i⟩))² ≥ ε. Also let v be a point γε/2-close to the minimum w*; we therefore have:
⟨G(w), w − v⟩ = (1/m) Σ_{i=1}^m (φ(⟨w, x_i⟩) − y_i)(⟨w, x_i⟩ − ⟨v, x_i⟩)
= (1/m) Σ_{i=1}^m (φ(⟨w, x_i⟩) − φ(⟨w*, x_i⟩))(⟨w, x_i⟩ − ⟨w*, x_i⟩ + ⟨w* − v, x_i⟩)
≥ (1/m) Σ_{i=1}^m (φ(⟨w, x_i⟩) − φ(⟨w*, x_i⟩))(⟨w, x_i⟩ − ⟨w*, x_i⟩) − γε/2
≥ (1/m) Σ_{i=1}^m γ(φ(⟨w, x_i⟩) − φ(⟨w*, x_i⟩))² − γε/2
≥ γε − γε/2 > 0. | 1507.02030#62 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1507.02030 | 63 | In the second line we used y_i = φ(⟨w*, x_i⟩), which holds by our assumption on w*. In the fourth line we used the fact that (φ(⟨w, x_i⟩) − φ(⟨w*, x_i⟩))(⟨w, x_i⟩ − ⟨w*, x_i⟩) ≥ γ(φ(⟨w, x_i⟩) − φ(⟨w*, x_i⟩))², which holds since w* is a minimizer with a γ-margin. We also used err_m(w) ≤ 1, and |⟨w* − v, x_i⟩| ≤ ‖w* − v‖ · ‖x_i‖ ≤ γε/2. Lastly, we use err_m(w) ≥ ε.
The strictness is immediate since ⟨G(w), w − v⟩ > 0; therefore, the above establishes SLQC. ∎
(11) | 1507.02030#63 | Beyond Convexity: Stochastic Quasi-Convex Optimization | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size. | http://arxiv.org/pdf/1507.02030 | Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | cs.LG, math.OC | null | null | cs.LG | 20150708 | 20151028 | [] |
1506.08909 | 0 | arXiv:1506.08909v3 [cs.CL] 4 Feb 2016
# The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Ryan Lowe†*, Nissan Pow*, Iulian V. Serban† and Joelle Pineau*
*School of Computer Science, McGill University, Montreal, Canada †Department of Computer Science and Operations Research, Université de Montréal, Montreal, Canada
# Abstract
This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.
# Introduction | 1506.08909#0 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 1 | # Introduction
The ability for a computer to converse in a natural and coherent manner with a human has long been held as one of the primary objectives of artificial intelligence (AI). In this paper we consider the problem of building dialogue agents that have the ability to interact in one-on-one multi-turn conversations on a diverse set of topics. We primarily target unstructured dialogues, where there is no a priori logical representation for the information exchanged during the conversation. This is in contrast to recent systems which focus on structured dialogue tasks, using a slot-filling representation [10, 27, 32]. | 1506.08909#1 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 2 | methods, more specifically with neural architectures [1]; however, it is worth noting that many of the most successful approaches, in particular convolutional and recurrent neural networks, were known for many years prior. It is therefore reasonable to attribute this progress to three major factors: 1) the public distribution of very large rich datasets [5], 2) the availability of substantial computing power, and 3) the development of new training methods for neural architectures, in particular leveraging unlabeled data. Similar progress has not yet been observed in the development of dialogue systems. We hypothesize that this is due to the lack of sufficiently large datasets, and aim to overcome this barrier by providing a new large corpus for research in multi-turn conversation. | 1506.08909#2 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 3 | The new Ubuntu Dialogue Corpus consists of almost one million two-person conversations extracted from the Ubuntu chat logs1, used to receive technical support for various Ubuntu-related problems. The conversations have an average of 8 turns each, with a minimum of 3 turns. All conversations are carried out in text form (not audio). The dataset is orders of magnitude larger than structured corpora such as those of the Dialogue State Tracking Challenge [32]. It is on the same scale as recent datasets for solving problems such as question answering and analysis of microblog services, such as Twitter [22, 25, 28, 33], but each conversation in our dataset includes several more turns, as well as longer utterances. Furthermore, because it targets a specific domain, namely technical support, it can be used as a case study for the development of AI agents in targeted applications, in contrast to chatbot agents that often lack a well-defined goal [26].
We observe that in several subfields of AI, such as computer vision, speech recognition, and machine translation, fundamental breakthroughs were achieved in recent years using machine learning
In addition to the corpus, we present learning architectures suitable for analyzing this dataset, ranging from the simple frequency-inverse docu- *The first two authors contributed equally. | 1506.08909#3 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 4 | 1 These logs are available from 2004 to 2015 at http://irclogs.ubuntu.com/
1These logs are available from 2004 to 2015 at http: //irclogs.ubuntu.com/
ment frequency (TF-IDF) approach, to more so- phisticated neural models including a Recurrent Neural Network (RNN) and a Long Short-Term Memory (LSTM) architecture. We provide bench- trained mark performance of these algorithms, with our new corpus, on the task of selecting the best next response, which can be achieved with- out requiring any human labeling. The dataset is ready for public release2. The code developed for the empirical results is also available3.
# 2 Related Work
We brieï¬y review existing dialogue datasets, and some of the more recent learning architectures used for both structured and unstructured dia- logues. This is by no means an exhaustive list (due to space constraints), but surveys resources most related to our contribution. A list of datasets discussed is provided in Table 1.
# 2.1 Dialogue Datasets | 1506.08909#4 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 5 | # 2.1 Dialogue Datasets
The Switchboard dataset [8], and the Dialogue State Tracking Challenge (DSTC) datasets [32] have been used to train and validate dialogue man- agement systems for interactive information re- trieval. The problem is typically formalized as a slot ï¬lling task, where agents attempt to predict the goal of a user during the conversation. These datasets have been signiï¬cant resources for struc- tured dialogues, and have allowed major progress in this ï¬eld, though they are quite small compared to datasets currently used for training neural archi- tectures.
Recently, a few datasets have been used con- taining unstructured dialogues extracted from Twitter4. Ritter et al. [21] collected 1.3 million conversations; this was extended in [28] to take ad- vantage of longer contexts by using A-B-A triples. Shang et al. [25] used data from a similar Chinese website called Weibo5. However to our knowl- edge, these datasets have not been made public, and furthermore, the post-reply format of such mi- croblogging services is perhaps not as represen- tative of natural dialogue between humans as the continuous stream of messages in a chat room. In | 1506.08909#5 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 6 | is now https://github.com/rkadlec/ available: ubuntu-ranking-dataset-creator. This ver- sion makes some adjustments and ï¬xes some bugs from the ï¬rst version.
3http://github.com/npow/ubottu 4https://twitter.com/ 5http://www.weibo.com/
fact, Ritter et al. estimate that only 37% of posts on Twitter are âconversational in natureâ, and 69% of their collected data contained exchanges of only length 2 [21]. We hypothesize that chat-room style messaging is more closely correlated to human-to- human dialogue than micro-blogging websites, or forum-based sites such as Reddit.
Part of the Ubuntu chat logs have previously been aggregated into a dataset, called the Ubuntu Chat Corpus [30]. However that resource pre- serves the multi-participant structure and thus is less amenable to the investigation of more tradi- tional two-party conversations.
Also weakly related to our contribution is the problem of question-answer systems. Several datasets of question-answer pairs are available [3], however these interactions are much shorter than what we seek to study.
# 2.2 Learning Architectures | 1506.08909#6 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 7 | # 2.2 Learning Architectures
Most dialogue research has historically focused on structured slot-ï¬lling tasks [24]. Various ap- proaches were proposed, yet few attempts lever- age more recent developments in neural learning architectures. A notable exception is the work of Henderson et al. [11], which proposes an RNN structure, initialized with a denoising autoencoder, to tackle the DSTC 3 domain. | 1506.08909#7 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 8 | Work on unstructured dialogues, recently pi- oneered by Ritter et al. [22], proposed a re- sponse generation model for Twitter data based on ideas from Statistical Machine Translation. This is shown to give superior performance to previ- ous information retrieval (e.g. nearest neighbour) approaches [14]. This idea was further devel- oped by Sordoni et al. [28] to exploit information from a longer context, using a structure similar to the Recurrent Neural Network Encoder-Decoder model [4]. This achieves rather poor performance on A-B-A Twitter triples when measured by the BLEU score (a standard for machine translation), yet performs comparatively better than the model of Ritter et al. [22]. Their results are also veriï¬ed with a human-subject study. A similar encoder- decoder framework is presented in [25]. This model uses one RNN to transform the input to some vector representation, and another RNN to âdecodeâ this representation to a response by gen- erating one word at a time. This model is also eval- uated in a human-subject study, although much smaller in size than in [28]. Overall, these models | 1506.08909#8 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 9 | Dataset Type Task # Dialogues # Utterances # Words Description Switchboard [8] DSTC1 [32] DSTC2 [10] DSTC3 [9] DSTC4[13] Twitter Corpus [21] Twitter Triple Corpus [28] Sina Weibo [25] Ubuntu Dialogue Corpus Human-human spoken Human-computer spoken Human-computer spoken Human-computer spoken Human-human spoken Human-human micro-blog Human-human micro-blog Human-human micro-blog Human-human chat Various State tracking State tracking State tracking State tracking Next utterance generation Next utterance generation Next utterance generation Next utterance classiï¬cation 2,400 15,000 3,000 2,265 35 1,300,000 29,000,000 4,435,959 930,000 â 210,000 24,000 15,000 â 3,000,000 87,000,000 8,871,918 7,100,000 3,000,000 â â â â â â 100,000,000 Telephone conversations on pre-speciï¬ed topics Bus ride information system Restaurant booking system Tourist information system 21 hours of tourist info exchange over Skype Post/ replies extracted from Twitter A-B-A triples from Twitter replies Post/ reply pairs extracted from Weibo Extracted from Ubuntu Chat Logs | 1506.08909#9 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 10 | Table 1: A selection of structured and unstructured large-scale datasets applicable to dialogue systems. Faded datasets are not publicly available. The last entry is our contribution.
highlight the potential of neural learning architec- tures for interactive systems, yet so far they have been limited to very short conversations.
# 3 The Ubuntu Dialogue Corpus
We seek a large dataset for research in dialogue systems with the following properties:
⢠Two-way (or dyadic) conversation, as op- posed to multi-participant chat, preferably human-human.
⢠Large number of conversations; 105 â 106 is typical of datasets used for neural-network learning in other areas of AI.
⢠Many conversations with several turns (more than 3).
⢠Task-speciï¬c domain, as opposed to chatbot systems.
All of these requirements are satisï¬ed by the Ubuntu Dialogue Corpus presented in this paper.
# 3.1 Ubuntu Chat Logs
The Ubuntu Chat Logs refer to a collection of logs from Ubuntu-related chat rooms on the Freenode Internet Relay Chat (IRC) network. This protocol allows for real-time chat between a large number of participants. Each chat room, or channel, has a particular topic, and every channel participant can see all the messages posted in a given chan- nel. Many of these channels are used for obtaining technical support with various Ubuntu issues. | 1506.08909#10 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 11 | a potential solution, after ï¬rst addressing the âuser- nameâ of the ï¬rst user. This is called a name men- tion [29], and is done to avoid confusion in the channel â at any given time during the day, there can be between 1 and 20 simultaneous conversa- tions happening in some channels. In the most popular channels, there is almost never a time when only one conversation is occurring; this ren- ders it particularly problematic to extract dyadic dialogues. A conversation between a pair of users generally stops when the problem has been solved, though some users occasionally continue to dis- cuss a topic not related to Ubuntu. | 1506.08909#11 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 12 | Despite the nature of the chat room being a con- stant stream of messages from multiple users, it is through the fairly rigid structure in the messages that we can extract the dialogues between users. Figure 4 shows an example chat room conversa- tion from the #ubuntu channel as well as the ex- tracted dialogues, which illustrates how users usu- ally state the username of the intended message recipient before writing their reply (we refer to all replies and initial questions as âutterancesâ). For example, it is clear that users âTaruâ and âkujaâ are engaged in a dialogue, as are users âOldâ and âbur[n]erâ, while user â_pmâ is asking an initial question, and âLiveCDâ is perhaps elaborating on a previous comment.
# 3.2 Dataset Creation
As the contents of each channel are moderated, most interactions follow a similar pattern. A new user joins the channel, and asks a general ques- tion about a problem they are having with Ubuntu. Then, another more experienced user replies with | 1506.08909#12 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 13 | In order to create the Ubuntu Dialogue Corpus, ï¬rst a method had to be devised to extract dyadic dialogues from the chat room multi-party conver- sations. The ï¬rst step was to separate every mes- sage into 4-tuples of (time, sender, recipient, utter- ance). Given these 4-tuples, it is straightforward to
group all tuples where there is a matching sender and recipient. Although it is easy to separate the time and the sender from the rest, ï¬nding the in- tended recipient of the message is not always triv- ial. | 1506.08909#13 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 14 | 3.2.1 Recipient Identiï¬cation While in most cases the recipient is the ï¬rst word of the utterance, it is sometimes located at the end, or not at all in the case of initial questions. Fur- thermore, some users choose names correspond- ing to common English words, such as âtheâ or âstopâ, which could lead to many false positives. In order to solve this issue, we create a dictionary of usernames from the current and previous days, and compare the ï¬rst word of each utterance to its If a match is found, and the word does entries. not correspond to a very common English word6, it is assumed that this user was the intended recip- ient of the message. If no matches are found, it is assumed that the message was an initial question, and the recipient value is left empty. | 1506.08909#14 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 15 | 3.2.2 Utterance Creation The dialogue extraction algorithm works back- wards from the ï¬rst response to ï¬nd the initial question that was replied to, within a time frame of 3 minutes. A ï¬rst response is identiï¬ed by the presence of a recipient name (someone from the recent conversation history). The initial question is identiï¬ed to be the most recent utterance by the recipient identiï¬ed in the ï¬rst response.
All utterances that do not qualify as a ï¬rst re- sponse or an initial question are discarded; initial questions that do not generate any response are also discarded. We additionally discard conversa- tions longer than ï¬ve utterances where one user says more than 80% of the utterances, as these are typically not representative of real chat dialogues. Finally, we consider only extracted dialogues that consist of 3 turns or more to encourage the model- ing of longer-term dependencies. | 1506.08909#15 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 16 | To alleviate the problem of âholesâ in the dia- logue, where one user does not address the other explicitly, as in Figure 5, we check whether each user talks to someone else for the duration of their conversation. If not, all non-addressed utterances are added to the dialogue. An example conversa- tion along with the extracted dialogues is shown in Figure 5. Note that we also concatenate all con- secutive utterances from a given user.
6We use the GNU Aspell spell checking dictionary.
10° 108 10° Number of dialogues, log scale 10? 10) 10? Number of turns per dialogue, log scale
Figure 1: Plot of number of conversations with a given number of turns. Both axes use a log scale.
# dialogues (human-human) # utterances (in total) # words (in total) Min. # turns per dialogue Avg. # turns per dialogue Avg. # words per utterance Median conversation length (min) 930,000 7,100,000 100,000,000 3 7.71 10.34 6
Table 2: Properties of Ubuntu Dialogue Corpus.
We do not apply any further pre-processing (e.g. tokenization, stemming) to the data as released in the Ubuntu Dialogue Corpus. However the use of pre-processing is standard for most NLP systems, and was also used in our analysis (see Section 4.) | 1506.08909#16 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 17 | # 3.2.3 Special Cases and Limitations
It is often the case that a user will post an ini- tial question, and multiple people will respond to it with different answers. In this instance, each conversation between the ï¬rst user and the user who replied is treated as a separate dialogue. This has the unfortunate side-effect of having the ini- tial question appear multiple times in several dia- logues. However the number of such cases is suf- ï¬ciently small compared to the size of the dataset. Another issue to note is that the utterance post- ing time is not considered for segmenting conver- sations between two users. Even if two users have a conversation that spans multiple hours, or even days, this is treated as a single dialogue. However, such dialogues are rare. We include the posting time in the corpus so that other researchers may ï¬lter as desired.
# 3.3 Dataset Statistics
Table 2 summarizes properties of the Ubuntu Dia- logue Corpus. One of the most important features | 1506.08909#17 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 | [
{
"id": "1503.02364"
},
{
"id": "1506.06863"
}
] |
1506.08909 | 18 | # 3.3 Dataset Statistics
Table 2 summarizes properties of the Ubuntu Dia- logue Corpus. One of the most important features
of the Ubuntu chat logs is its size. This is cru- cial for research into building dialogue managers based on neural architectures. Another important characteristic is the number of turns in these dia- logues. The distribution of the number of turns is shown in Figure 1. It can be seen that the num- ber of dialogues and turns per dialogue follow an approximate power law relationship.
# 3.4 Test Set Generation | 1506.08909#18 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
We set aside 2% of the Ubuntu Dialogue Corpus conversations (randomly selected) to form a test set that can be used for evaluation of response selection algorithms. Compared to the rest of the corpus, this test set has been further processed to extract a pair of (context, response, flag) triples from each dialogue. The flag is a Boolean variable indicating whether or not the response was the actual next utterance after the given context. The response is a target (output) utterance which we aim to correctly identify. The context consists of the sequence of utterances appearing in dialogue prior to the response. We create a pair of triples, where one triple contains the correct response (i.e. the actual next utterance in the dialogue), and the other triple contains a false response, sampled randomly from elsewhere within the test set. The flag is set to 1 in the first case and to 0 in the second case. An example pair is shown in Table 3. To make the task harder, we can move from pairs of responses (one correct, one incorrect) to a larger set of wrong responses (all with flag=0). In our experiments below, we consider both the case of 1 wrong response and 10 wrong responses.
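For concreteness, the following Python sketch shows one way such (context, response, flag) triples could be assembled. It is only an illustration of the procedure described above, not the released dataset-creation script; all function and variable names are ours, and for brevity it uses the whole dialogue minus the last turn as the context rather than sampling the context length.

```python
import random

def make_test_triples(dialogues, num_distractors=1, seed=0):
    """Build (context, response, flag) triples from tokenized dialogues.

    Each dialogue is a list of utterance strings. The actual next utterance
    gets flag=1; `num_distractors` utterances sampled from elsewhere get flag=0.
    """
    rng = random.Random(seed)
    pool = [u for d in dialogues for u in d]  # distractor pool: every utterance
    triples = []
    for d in dialogues:
        if len(d) < 3:
            continue
        context = " __EOS__ ".join(d[:-1])
        true_response = d[-1]
        triples.append((context, true_response, 1))
        for _ in range(num_distractors):
            triples.append((context, rng.choice(pool), 0))
    return triples

dialogues = [
    ["well, can I move the drives?", "ah not like that",
     "I guess I could just get an enclosure and copy via USB"],
    ["any apache hax around?", "reconfiguring apache doesn't solve it?",
     "doesn't seem to, no"],
]
for triple in make_test_triples(dialogues, num_distractors=1):
    print(triple)
```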
Context: well, can I move the drives? __EOS__ ah not like that
Response: I guess I could just get an enclosure and copy via USB (Flag: 1)
Response: you can use "ps ax" and "kill (PID #)" (Flag: 0)

Table 3: Test set example with (context, reply, flag) format. The "__EOS__" tag is used to denote the end of an utterance within the context.
Since we want to learn to predict all parts of a conversation, as opposed to only the closing statement, we consider various portions of context for the conversations in the test set. The context size is determined stochastically using a simple formula:

c = min(t - 1, n - 1), where n = 10C/η + 2, η ~ Unif(C/2, 10C)

Here, C denotes the maximum desired context size, which we set to C = 20. The last term is the desired minimum context size, which we set to be 2. Parameter t is the actual length of that dialogue (thus the constraint that c ≤ t - 1), and n is a random number corresponding to the randomly sampled context length, selected to be inversely proportional to η.
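A minimal sketch of this sampling rule, assuming the reconstruction n = 10C/η + 2 given above (the helper name is illustrative, not from the released tools):

```python
import random

def sample_context_size(t, C=20, rng=None):
    """Sample a context size c = min(t - 1, n - 1) for a dialogue of length t,
    with n = 10*C/eta + 2 and eta ~ Uniform(C/2, 10*C), so that small contexts
    (down to the minimum of 2) are sampled far more often than long ones."""
    rng = rng or random.Random()
    eta = rng.uniform(C / 2, 10 * C)
    n = 10 * C / eta + 2
    return min(t - 1, int(n) - 1)

rng = random.Random(0)
print([sample_context_size(t=30, rng=rng) for _ in range(12)])
```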
In practice, this leads to short test dialogues having short contexts, while longer dialogues are often broken into short or medium-length segments, with the occasional long context of 10 or more turns.
# 3.5 Evaluation Metric
We consider the task of best response selection. This can be achieved by processing the data as described in Section 3.4, without requiring any human labels. This classification task is an adaptation of the recall and precision metrics previously applied to dialogue datasets [24].

A family of metrics often used in language tasks is Recall@k (denoted R@1, R@2, R@5 below). Here the agent is asked to select the k most likely responses, and it is correct if the true response is among these k candidates. Only the R@1 metric is relevant in the case of binary classification (as in the Table 3 example).
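A small illustration of how Recall@k can be computed for a single test example (our own helper, not part of the paper's evaluation code):

```python
def recall_at_k(candidate_scores, true_index, k):
    """Return 1 if the true response is among the k highest-scoring candidates."""
    ranked = sorted(range(len(candidate_scores)),
                    key=lambda i: candidate_scores[i], reverse=True)
    return int(true_index in ranked[:k])

# 1-in-10 example: the true response (index 0) is ranked second,
# so R@1 = 0 while R@2 = R@5 = 1.
scores = [0.71, 0.80, 0.10, 0.05, 0.33, 0.02, 0.40, 0.15, 0.22, 0.09]
print(recall_at_k(scores, 0, 1), recall_at_k(scores, 0, 2), recall_at_k(scores, 0, 5))
```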
Although a language model that performs well on response classification is not a gauge of good performance on next utterance generation, we hypothesize that improvements on the classification task will eventually lead to improvements on the generation task. See Section 6 for further discussion of this point.
# 4 Learning Architectures for Unstructured Dialogues
To provide further evidence of the value of our dataset for research into neural architectures for dialogue managers, we provide performance benchmarks for two neural learning algorithms, as well as one naive baseline. The approaches considered are: TF-IDF, Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM). Prior to applying each method, we perform standard pre-processing of the data using the NLTK library (www.nltk.org) and the Twitter tokenizer (http://www.ark.cs.cmu.edu/TweetNLP/) to parse each utterance. We use generic tags for various word categories, such as names, locations, organizations, URLs, and system paths.
To train the RNN and LSTM architectures, we process the full training Ubuntu Dialogue Corpus into the same format as the test set described in Section 3.4, extracting (context, response, flag) triples from dialogues. For the training set, we do not sample the context length, but instead consider each utterance (starting at the 3rd one) as a potential response, with the previous utterances as its context. So a dialogue of length 10 yields 8 training examples. Since these are overlapping, they are clearly not independent, but we consider this a minor issue given the size of the dataset (we further alleviate the issue by shuffling the training examples). Negative responses are selected at random from the rest of the training data.
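A rough sketch of this training-set construction, under the description above (hypothetical helper names; the " __EOS__ " joining convention follows Table 3):

```python
import random

def training_triples(dialogue, utterance_pool, rng):
    """Turn one dialogue (a list of utterances) into (context, response, flag) pairs.

    Each utterance from the 3rd one onward is a positive response whose context is
    all preceding utterances; every positive is paired with one random negative,
    giving a 1:1 ratio. A dialogue of length 10 yields 8 positive examples.
    """
    examples = []
    for i in range(2, len(dialogue)):
        context = " __EOS__ ".join(dialogue[:i])
        examples.append((context, dialogue[i], 1))                 # true response
        examples.append((context, rng.choice(utterance_pool), 0))  # random negative
    return examples

rng = random.Random(0)
dialogue = ["hi", "hello", "my grub install is broken", "which error do you see?"]
pool = ["try sudo apt-get update", "reboot and check the BIOS", "run ps ax"]
for example in training_triples(dialogue, pool, rng):
    print(example)
```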
# 4.1 TF-IDF
Term frequency-inverse document frequency is a statistic that intends to capture how important a given word is to some document, which in our case is the context [20]. It is a technique often used in document classification and information retrieval. The "term-frequency" term is simply a count of the number of times a word appears in a given context, while the "inverse document frequency" term puts a penalty on how often the word appears elsewhere in the corpus. The final score is calculated as the product of these two terms, and has the form:

tfidf(w, d, D) = f(w, d) x log( N / |{d ∈ D : w ∈ d}| ),

where f(w, d) indicates the number of times word w appeared in context d, N is the total number of dialogues, and the denominator represents the number of dialogues in which the word w appears. For classification, the TF-IDF vectors are first calculated for the context and each of the candidate responses. Given a set of candidate response vectors, the one with the highest cosine similarity to the context vector is selected as the output. For Recall@k, the top k responses are returned.
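The following self-contained Python sketch illustrates this TF-IDF ranking baseline; it is a simplified stand-in with our own helper names, not the authors' implementation:

```python
import math
from collections import Counter

def tfidf_vector(text, corpus):
    """TF-IDF weights for `text`, with document frequencies computed over `corpus`."""
    N = len(corpus)
    vec = {}
    for w, f in Counter(text.split()).items():
        df = sum(1 for d in corpus if w in d.split()) or 1  # guard unseen words
        vec[w] = f * math.log(N / df)
    return vec

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = ["how do i mount a usb drive",
          "apache will not restart after the upgrade",
          "use sudo mount /dev/sdb1 to mount the drive"]
context = "how do i mount a usb drive"
candidates = ["use sudo mount /dev/sdb1 to mount the drive",
              "apache will not restart after the upgrade"]
ctx = tfidf_vector(context, corpus)
ranked = sorted(candidates, key=lambda r: cosine(ctx, tfidf_vector(r, corpus)), reverse=True)
print(ranked)  # the mount-related reply ranks first
```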
# 4.2 RNN
Figure 2: Diagram of our model. The RNNs have tied weights. c, r are the last hidden states from the RNNs. ci, ri are word vectors for the context and response, i < t. We consider contexts up to a maximum of t = 160.

Recurrent neural networks are a variant of neural networks that allows for time-delayed directed cycles between units [17]. This leads to the formation of an internal state of the network, ht, which allows it to model time-dependent data. The internal state is updated at each time step as some function of the observed variables xt and the hidden state at the previous time step ht-1. Wx and Wh are matrices associated with the input and hidden state.
ht = f(Wh ht-1 + Wx xt).
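As an illustration of this update, here is a small NumPy sketch. The weights are random and the tanh nonlinearity, the 50-unit hidden state, and the 300-dimensional word vectors are our assumptions for the example; in the actual model Wh and Wx are learned and the word vectors are the pre-trained embeddings described below.

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, f=np.tanh):
    """One recurrent update: h_t = f(W_h h_{t-1} + W_x x_t)."""
    return f(W_h @ h_prev + W_x @ x_t)

def encode(word_vectors, hidden_size=50, seed=0):
    """Run the update over a sequence of word vectors and return the final
    hidden state, which serves as the summary of the utterance. Weights are
    random here purely for illustration."""
    rng = np.random.default_rng(seed)
    dim = word_vectors.shape[1]
    W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    W_x = rng.normal(scale=0.1, size=(hidden_size, dim))
    h = np.zeros(hidden_size)
    for x_t in word_vectors:
        h = rnn_step(h, x_t, W_h, W_x)
    return h

utterance = np.random.default_rng(1).normal(size=(7, 300))  # 7 words, 300-d embeddings
print(encode(utterance).shape)  # (50,)
```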
A diagram of an RNN can be seen in Figure 2. RNNs have been the primary building block of many current neural language models [22, 28], which use RNNs for an encoder and decoder. The first RNN is used to encode the given context, and the second RNN generates a response by using beam-search, where its initial hidden state is biased using the final hidden state from the first RNN. In our work, we are concerned with classification of responses, instead of generation. We build upon the approach in [2], which has also been recently applied to the problem of question answering [33].
We utilize a siamese network consisting of two RNNs with tied weights to produce the embeddings for the context and response. Given some input context and response, we compute their embeddings, c, r ∈ R^d respectively, by feeding the word embeddings one at a time into its respective RNN. Word embeddings are initialized using the pre-trained vectors (Common Crawl, 840B tokens from [19]), and fine-tuned during training. The hidden state of the RNN is updated at each step, and the final hidden state represents a summary of the input utterance. Using the final hidden states from both RNNs, we then calculate the probability that this is a valid pair:

p(flag = 1 | c, r, M) = σ(c^T M r + b),

where the bias b and the matrix M ∈ R^{d×d} are learned model parameters. This can be thought of as a generative approach; given some input response, we generate a context with the product c' = Mr, and measure the similarity to the actual context using the dot product. This is converted to a probability with the sigmoid function. The model is trained by minimizing the cross entropy of all labeled (context, response) pairs [33]:
L = - Σ_n log p(flag_n | c_n, r_n, M) + (λ/2) ||θ||_F^2
where ||θ||_F is the Frobenius norm of θ = {M, b}. In our experiments, we use λ = 0 for computational simplicity.
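A compact sketch of the scoring function and this objective (binary cross entropy over labeled pairs, plus the optional Frobenius penalty); the dimensions, data, and function names below are illustrative assumptions, and for simplicity the penalty is applied to M only:

```python
import numpy as np

def pair_probability(c, r, M, b):
    """p(flag = 1 | c, r, M) = sigmoid(c^T M r + b)."""
    return 1.0 / (1.0 + np.exp(-(c @ M @ r + b)))

def objective(contexts, responses, flags, M, b, lam=0.0):
    """Cross entropy of the labeled pairs plus (lam/2)*||M||_F^2;
    with lam = 0 this reduces to the plain log loss used in the experiments."""
    total = 0.0
    for c, r, y in zip(contexts, responses, flags):
        p = pair_probability(c, r, M, b)
        total -= y * np.log(p) + (1 - y) * np.log(1 - p)
    return total + 0.5 * lam * np.sum(M ** 2)

rng = np.random.default_rng(0)
d = 50                              # size of the final hidden states
contexts = rng.normal(size=(4, d))  # four context embeddings
responses = rng.normal(size=(4, d)) # four response embeddings
flags = [1, 0, 1, 0]
M = rng.normal(scale=0.1, size=(d, d))
print(objective(contexts, responses, flags, M, b=0.0))
```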
For training, we used a 1:1 ratio between true responses (flag = 1), and negative responses (flag = 0) drawn randomly from elsewhere in the training set. The RNN architecture is set to 1 hidden layer with 50 neurons. The Wh matrix is initialized using orthogonal weights [23], while Wx is initialized using a uniform distribution with values between -0.01 and 0.01. We use Adam as our optimizer [15], with gradients clipped to 10. We found that weight initialization as well as the choice of optimizer were critical for training the RNNs.
# 4.3 LSTM
In addition to the RNN model, we consider the same architecture but change the hidden units to long short-term memory (LSTM) units [12]. LSTMs were introduced in order to model longer-term dependencies. This is accomplished using a series of gates that determine whether a new input should be remembered, forgotten (and the old value retained), or used as output. The error signal can now be fed back indefinitely into the gates of the LSTM unit. This helps overcome the vanishing and exploding gradient problems in standard RNNs, where the error gradients would otherwise decrease or increase at an exponential rate. In training, we used 1 hidden layer with 200 neurons. The hyper-parameter configuration (including number of neurons) was optimized independently for RNNs and LSTMs using a validation set extracted from the training data.
# 5 Empirical Results
The results for the TF-IDF, RNN, and LSTM models are shown in Table 4. The models were evaluated using both 1 (1 in 2) and 9 (1 in 10) false responses.
We observe that the LSTM outperforms both the RNN and TF-IDF on all evaluation metrics. It is interesting to note that TF-IDF actually outperforms the RNN on the Recall@1 case for the 1 in 10 classification. This is most likely due to the limited ability of the RNN to take into account long contexts, which can be overcome by using the LSTM. An example output of the LSTM where the response is correctly classified is shown in Table 5. We also show, in Figure 3, the increase in performance of the LSTM as the amount of data used for training increases. This confirms the importance of having a large training set.

Context: "any apache hax around ? i just deleted all of __path__ - which package provides it ?", "reconfiguring apache do n't solve it ?"
Ranked response 1: "does n't seem to , no" (Flag: 1)
Ranked response 2: "you can log in but not transfer files ?" (Flag: 0)

Table 5: Example showing the ranked responses from the LSTM. Each utterance is shown after pre-processing steps.

# 6 Discussion
This paper presents the Ubuntu Dialogue Corpus, a large dataset for research in unstructured multi-turn dialogue systems. We describe the construction of the dataset and its properties. The availability of a dataset of this size opens up several interesting possibilities for research into dialogue systems based on rich neural-network architectures. We present preliminary results demonstrating use of this dataset to train an RNN and an LSTM for the task of selecting the next best response in a conversation; we obtain significantly better results with the LSTM architecture. There are several interesting directions for future work.

Figure 3: The LSTM (with 200 hidden units), showing Recall@1 for the 1 in 10 classification, with increasing dataset sizes.

Footnote 9: Note that these results are on the original dataset. Results on the new dataset should not be compared to the old dataset; baselines on the new dataset will be released shortly.

# 6.1 Conversation Disentanglement
Our approach to conversation disentanglement consists of a small set of rules. More sophisticated techniques have been proposed, such as training a maximum-entropy classifier to cluster utterances into separate dialogues [6]. However, since we are not trying to replicate the exact conversation between two users, but only to retrieve plausible natural dialogues, the heuristic method presented in this paper may be sufficient. This seems supported through qualitative examination of the data, but could be the subject of more formal evaluation.

# 6.2 Altering Test Set Difficulty

One of the interesting properties of the response selection task is the ability to alter the task difficulty in a controlled manner. We demonstrated this by moving from 1 to 9 false responses, and by varying the Recall@k parameter. In the future, instead of choosing false responses randomly, we will consider selecting false responses that are similar to the actual response (e.g. as measured by cosine similarity). A dialogue model that performs well on this more difficult task should also manage to capture a more fine-grained semantic meaning of sentences, as compared to a model that naively picks replies with the most words in common with the context, such as TF-IDF.
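As a simple illustration of this harder setup, one could rank candidate distractors by their similarity to the true response. The sketch below uses plain word-overlap (Jaccard) similarity as a cheap stand-in for the cosine-similarity criterion mentioned above, and the helper name is ours:

```python
def hard_distractors(true_response, candidate_pool, k=9):
    """Pick the k candidates most similar to the true response, so the wrong
    answers are no longer easy to reject."""
    truth = set(true_response.split())
    def similarity(candidate):
        words = set(candidate.split())
        return len(truth & words) / max(1, len(truth | words))
    ranked = sorted((c for c in candidate_pool if c != true_response),
                    key=similarity, reverse=True)
    return ranked[:k]

pool = ["try mounting it with sudo", "use sudo mount /dev/sdb1",
        "reinstall grub from a live cd", "what does dmesg say?"]
print(hard_distractors("you can mount it with sudo mount /dev/sdb1", pool, k=2))
```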
# 6.3 State Tracking and Utterance Generation
The work described here focuses on the task of response selection. This can be seen as an intermediate step between slot filling and utterance generation. In slot filling, the set of candidate outputs (states) is identified a priori through knowledge engineering, and is typically smaller than the set of responses considered in our work. When the set of candidate responses is close to the size of the dataset (e.g. all utterances ever recorded), then we are quite close to the response generation case. There are several reasons not to proceed directly to response generation. First, it is likely that current algorithms are not yet able to generate good results for this task, and it is preferable to tackle metrics for which we can make progress. Second, we do not yet have a suitable metric for evaluating performance in the response generation case. One option is to use the BLEU [18] or METEOR [16] scores from machine translation. However, using BLEU to evaluate dialogue systems has been shown to give extremely low scores [28], due to the large space of potential sensible responses [7]. Further, since the BLEU score is calculated using N-grams [18], it would provide a very low score for reasonable responses that do not have any words in common with the ground-truth next utterance.
Alternatively, one could measure the difference between the generated utterance and the actual sentence by comparing their representations in some embedding (or semantic) space. However, different models inevitably use different embeddings, necessitating a standardized embedding for evaluation purposes. Such a standardized embedding has yet to be created.
Another possibility is to use human subjects to score automatically generated responses, but time and expense make this a highly impractical option. In summary, while it is possible that current language models have outgrown the use of slot filling as a metric, we are currently unable to measure their ability in next utterance generation in a standardized, meaningful and inexpensive way. This motivates our choice of response selection as a useful metric for the time being.
# Acknowledgments
The authors gratefully acknowledge financial support for this work by the Samsung Advanced Institute of Technology (SAIT) and the Natural Sciences and Engineering Research Council of Canada (NSERC). We would like to thank Laurent Charlin for his input into this paper, as well as Gabriel Forgues and Eric Crawford for interesting discussions.
# References
[1] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798-1828, 2013.
[2] A. Bordes, J. Weston, and N. Usunier. Open question answering with weakly supervised embedding models. In MLKDD, pages 165-180. Springer, 2014.

[3] J. Boyd-Graber, B. Satinoff, H. He, and H. Daume. Besting the quiz master: Crowdsourcing incremental classification games. In EMNLP, 2012.

[4] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

[5] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

[6] M. Elsner and E. Charniak. You talking to me? A corpus and algorithm for conversation disentanglement. In ACL, pages 834-842, 2008.
[7] M. Galley, C. Brockett, A. Sordoni, Y. Ji, M. Auli, C. Quirk, M. Mitchell, J. Gao, and B. Dolan. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863, 2015.

[8] J.J. Godfrey, E.C. Holliman, and J. McDaniel. Switchboard: Telephone speech corpus for research and development. In ICASSP, 1992.

[9] M. Henderson, B. Thomson, and J. Williams. Dialog state tracking challenge 2 & 3, 2014.

[10] M. Henderson, B. Thomson, and J. Williams. The second dialog state tracking challenge. In SIGDIAL, page 263, 2014.

[11] M. Henderson, B. Thomson, and S. Young. Word-based dialog state tracking with recurrent neural networks. In SIGDIAL, page 292, 2014.

[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
[13] Dialog state tracking challenge 4.

[14] S. Jafarpour, C. Burges, and A. Ritter. Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10, 2010.

[15] D. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.

[16] A. Lavie and M.J. Denkowski. The METEOR metric for automatic evaluation of Machine Translation. Machine Translation, 23(2-3):105-115, 2009.

[17] L.R. Medsker and L.C. Jain. Recurrent neural networks. Design and Applications, 2001.

[18] K. Papineni, S. Roukos, T. Ward, and W.J. Zhu. BLEU: a method for automatic evaluation of machine translation. In ACL, 2002.
[19] J. Pennington, R. Socher, and C.D. Manning. GloVe: Global Vectors for Word Representation. In EMNLP, 2014.
[20] J. Ramos. Using TF-IDF to determine word relevance in document queries. In ICML, 2003.

[21] A. Ritter, C. Cherry, and W. Dolan. Unsupervised modeling of Twitter conversations. 2010.

[22] A. Ritter, C. Cherry, and W. Dolan. Data-driven response generation in social media. In EMNLP, pages 583-593, 2011.

[23] A.M. Saxe, J.L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

[24] J. Schatzmann, K. Georgila, and S. Young. Quantitative evaluation of user simulation techniques for spoken dialogue systems. In SIGDIAL, 2005.

[25] L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015.

[26] B. A. Shawar and E. Atwell. Chatbots: are they really useful? In LDV Forum, volume 22, pages 29-49, 2007.
[27] S. Singh, D. Litman, M. Kearns, and M. Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, 16:105-133, 2002.

[28] A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.Y. Nie, J. Gao, and W. Dolan. A neural network approach to context-sensitive generation of conversational responses. 2015.

[29] D.C. Uthus and D.W. Aha. Extending word highlighting in multiparticipant chat. Technical report, DTIC Document, 2013.

[30] D.C. Uthus and D.W. Aha. The Ubuntu chat corpus for multiparticipant chat analysis. In AAAI Spring Symposium on Analyzing Microtext, pages 99-102, 2013.

[31] H. Wang, Z. Lu, H. Li, and E. Chen. A dataset for research on short-text conversations. In EMNLP, 2013.